Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
Very good catch Pierre, thanks a lot!

This made everything work: both the two-step process and the ptap(). I 
mistakenly thought that I should not leave the local number of columns as None, 
since the matrix is only partitioned row-wise. Could you please explain what 
went wrong when I set the local column number explicitly, so that I understand 
the philosophy behind this partitioning?

Thanks again,
Thanos
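
For context, the size tuples Pierre corrects below follow the convention
((local_rows, global_rows), (local_cols, global_cols)), where None stands for
PETSC_DECIDE; a minimal standalone sketch (plain petsc4py, illustrative values):

from petsc4py import PETSc

global_rows, global_cols = 100, 7
# None for a local size means PETSC_DECIDE: PETSc picks a compatible
# row and column layout on each rank.
size = ((None, global_rows), (None, global_cols))
A = PETSc.Mat().createAIJ(size=size, comm=PETSc.COMM_WORLD)
A.setUp()
# The row ownership is what gets split across ranks; the column map is
# what MatPtAP and the mat-mat products check for compatibility.
print(A.getOwnershipRange(), A.getOwnershipRangeColumn())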

> On 11 Oct 2023, at 09:04, Pierre Jolivet  wrote:
> 
> That’s because:
> size = ((None, global_rows), (global_cols, global_cols)) 
> should be:
> size = ((None, global_rows), (None, global_cols)) 
> Then, it will work.
> $ ~/repo/petsc/arch-darwin-c-debug-real/bin/mpirun -n 4 python3.12 test.py && 
> echo $?
> 0
> 
> Thanks,
> Pierre
> 
>> On 11 Oct 2023, at 8:58 AM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Pierre, I see your point, but my experiment shows that it does not even run 
>> due to a size mismatch, so I don’t see how being sparse would change things 
>> here. There must be some kind of problem with the parallel ptap(), because 
>> it does run sequentially. In order to test that, I changed the flags of the 
>> matrix creation to sparse=True and ran it again. Here is the code:
>> 
>> """Experimenting with PETSc mat-mat multiplication"""
>> 
>> import numpy as np
>> from firedrake import COMM_WORLD
>> from firedrake.petsc import PETSc
>> 
>> from utilities import Print
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> 
>> def create_petsc_matrix(input_array, sparse=True):
>> """Create a PETSc matrix from an input_array
>> 
>> Args:
>> input_array (np array): Input array
>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>> True.
>> 
>> Returns:
>> PETSc mat: PETSc mpi matrix
>> """
>> # Check if input_array is 1D and reshape if necessary
>> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>> global_rows, global_cols = input_array.shape
>> size = ((None, global_rows), (global_cols, global_cols))
>> 
>> # Create a sparse or dense matrix based on the 'sparse' argument
>> if sparse:
>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>> else:
>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>> matrix.setUp()
>> 
>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>> 
>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>> # Calculate the correct row in the array for the current process
>> row_in_array = counter + local_rows_start
>> matrix.setValues(
>> i, range(global_cols), input_array[row_in_array, :], addv=False
>> )
>> 
>> # Assemble the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>> 
>> # 
>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
>> Phi
>> #  A' = Phi.T * A * Phi
>> # [k x k] <- [k x m] x [m x m] x [m x k]
>> # 
>> 
>> m, k = 100, 7
>> # Generate the random numpy matrices
>> np.random.seed(0)  # sets the seed to 0
>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>> 
>> # 
>> # TEST: Galerkin projection of numpy matrices A_np and Phi_np
>> # 
>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>> 
>> # Create A as an mpi matrix distributed on each process
>> A = create_petsc_matrix(A_np, sparse=True)
>> 
>> # Create Phi as an mpi matrix distributed on each process
>> Phi = create_petsc_matrix(Phi_np, sparse=True)
>> 
>> # Create an empty PETSc matrix object to store the result of the PtAP 
>> operation.
>> # This will hold the result A' = Phi.T * A * Phi after the computation.
>> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=True)
>> 
>> # Perform the PtAP (Phi Transpose times A times Phi) operation.
>> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
>> # A_prime will store the result of the operation.
>> A_prime = A.ptap(Phi)

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
Furthermore, I tried to perform the Galerkin projection in two steps by 
substituting

> A_prime = A.ptap(Phi)

With 

AL = Phi.transposeMatMult(A)
A_prime = AL.matMult(Phi)

Running this with 3 procs results in the creation of a matrix AL whose 
dimensions are three times larger than they should be (A is of size 100x100 and 
Phi of size 100x7); a corrected sketch follows the traceback below:

MATRIX mpiaij AL [21x300]
Assembled

Partitioning for AL:
  Rank 0: Rows [0, 7)
  Rank 1: Rows [7, 14)
  Rank 2: Rows [14, 21)

And naturally, this results in another dimension incompatibility:

Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
A_prime = AL.matMult(Phi)
A_prime = AL.matMult(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
Traceback (most recent call last):
  File "/Users/boutsitron/petsc-experiments/mat_vec_multiplication2.py", line 
85, in 
petsc4py.PETSc.Error: error code 60
[2] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[2] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[2] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[2] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[2] Nonconforming object sizes
[2] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 2 (rank 2 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 2
petsc4py.PETSc.Error: error code 60
[1] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[1] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[1] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[1] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[1] Nonconforming object sizes
[1] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 1 (rank 1 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 1
A_prime = AL.matMult(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1492, in petsc4py.PETSc.Mat.matMult
petsc4py.PETSc.Error: error code 60
[0] MatMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10053
[0] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9976
[0] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[0] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:421
[0] Nonconforming object sizes
[0] Matrix dimensions of A and B are incompatible for MatProductType AB: A 
21x300, B 100x7
Abort(1) on node 0 (rank 0 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0
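
For reference, a standalone sketch of the projection once both local sizes are
left to PETSc, which is the fix reported at the top of this thread (plain
petsc4py rather than the utilities-based script, so names here are illustrative):

import numpy as np
from petsc4py import PETSc

m, k = 100, 7
rng = np.random.default_rng(0)
A_np = rng.integers(0, 6, size=(m, m)).astype(PETSc.ScalarType)
Phi_np = rng.integers(0, 6, size=(m, k)).astype(PETSc.ScalarType)


def to_mpiaij(arr):
    """Distribute a numpy array as an MPIAIJ matrix, leaving both local
    sizes to PETSc (None means PETSC_DECIDE)."""
    rows, cols = arr.shape
    M = PETSc.Mat().createAIJ(((None, rows), (None, cols)), comm=PETSc.COMM_WORLD)
    M.setUp()
    r0, r1 = M.getOwnershipRange()
    for i in range(r0, r1):
        M.setValues(i, range(cols), arr[i, :])
    M.assemblyBegin()
    M.assemblyEnd()
    return M


A = to_mpiaij(A_np)
Phi = to_mpiaij(Phi_np)

AL = Phi.transposeMatMult(A)   # Phi^T A       (k x m)
A_prime = AL.matMult(Phi)      # Phi^T A Phi   (k x k), two-step
A_prime_ptap = A.ptap(Phi)     # same projection via MatPtAP in one call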

> On 10 Oct 2023, at 23:33, Thanasis Boutsikakis 
>  wrote:
> 
> Hi all,
> 
> Revisiting my code and the proposed solution from Pierre, I realized this 
> works only in sequential. The reason is that PETSc partitions those matrices 
> only row-wise, which leads to an error due to the mismatch between number of 
> columns of A (non-partitioned) and the number of rows of Phi (partitioned).
> 
> """Experimenting with PETSc mat-mat multiplication"""
> 
> import time
> 
> import numpy as np
> from colorama import Fore
> from firedrake import COMM_SELF, COMM_WORLD
> from firedrake.petsc import PETSc
> from mpi4py import MPI
> from numpy.testing import assert_array_almost_equal
> 
> from utilities import Print
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> def create_petsc_matrix(input_array, sparse=True):
> """Create a PETSc matrix from an input_array
> 
> Args:
> input_array (np array): Input array
> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
> True.
> 
> Returns:
> PETSc mat: PETSc mpi matrix
> """
> # Check if input_array is 1D and reshape if necessary
> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
> global_rows, global_cols = input_array.shape
> size = ((None, global_rows), (global_cols, global_cols))

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-11 Thread Thanasis Boutsikakis
/src/petsc/src/mat/interface/matrix.c:9896
[1] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[1] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
[1] MatProductSetFromOptions_MPIAIJ() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
[1] MatProductSetFromOptions_MPIAIJ_PtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
[1] Nonconforming object sizes
[1] Matrix local dimensions are incompatible, Acol (100, 200) != Prow (34,67)
Abort(1) on node 1 (rank 1 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 1
petsc4py.PETSc.Error: error code 60
[2] MatPtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
[2] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[2] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
[2] MatProductSetFromOptions_MPIAIJ() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
[2] MatProductSetFromOptions_MPIAIJ_PtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
[2] Nonconforming object sizes
[2] Matrix local dimensions are incompatible, Acol (200, 300) != Prow (67,100)
Abort(1) on node 2 (rank 2 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 2
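
A sketch of the B = A * Phi, C = Phi^T * B route that Pierre suggests in the
quoted reply below (standalone petsc4py, dense matrices, illustrative names; as
Pierre notes below, the second product may still run into the originally
reported issue):

import numpy as np
from petsc4py import PETSc

m, k = 100, 7
rng = np.random.default_rng(0)


def to_mpidense(arr):
    """Distribute a numpy array as an MPIDENSE matrix (layout left to PETSc)."""
    rows, cols = arr.shape
    M = PETSc.Mat().createDense(((None, rows), (None, cols)), comm=PETSc.COMM_WORLD)
    M.setUp()
    r0, r1 = M.getOwnershipRange()
    M.setValues(range(r0, r1), range(cols), arr[r0:r1, :])
    M.assemblyBegin()
    M.assemblyEnd()
    return M


A = to_mpidense(rng.random((m, m)))
Phi = to_mpidense(rng.random((m, k)))

B = A.matMult(Phi)           # B = A Phi     (m x k)
C = Phi.transposeMatMult(B)  # C = Phi^T B   (k x k)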

> On 11 Oct 2023, at 07:18, Pierre Jolivet  wrote:
> 
> I disagree with what Mark and Matt are saying: your code is fine, the error 
> message is fine, petsc4py is fine (in this instance).
> It’s not a typical use case of MatPtAP(), which is mostly designed for 
> MatAIJ, not MatDense.
> On the one hand, in the MatDense case, indeed there will be a mismatch 
> between the number of columns of A and the number of rows of P, as written in 
> the error message.
> On the other hand, there is not much to optimize when computing C = P’ A P 
> with everything being dense.
> I would just write this as B = A P and then C = P’ B (but then you may face 
> the same issue as initially reported, please let us know then).
> 
> Thanks,
> Pierre
> 
>> On 11 Oct 2023, at 2:42 AM, Mark Adams  wrote:
>> 
>> This looks like a false positive or there is some subtle bug here that we 
>> are not seeing.
>> Could this be the first time parallel PtAP has been used (and reported) in 
>> petsc4py?
>> 
>> Mark
>> 
>> On Tue, Oct 10, 2023 at 8:27 PM Matthew Knepley wrote:
>>> On Tue, Oct 10, 2023 at 5:34 PM Thanasis Boutsikakis wrote:
>>>> Hi all,
>>>> 
>>>> Revisiting my code and the proposed solution from Pierre, I realized this 
>>>> works only in sequential. The reason is that PETSc partitions those 
>>>> matrices only row-wise, which leads to an error due to the mismatch 
>>>> between number of columns of A (non-partitioned) and the number of rows of 
>>>> Phi (partitioned).
>>> 
>>> Are you positive about this? P^T A P is designed to run in this scenario, 
>>> so either we have a bug or the diagnosis is wrong.
>>> 
>>>   Thanks,
>>> 
>>>  Matt
>>>  
>>>> """Experimenting with PETSc mat-mat multiplication"""
>>>> 
>>>> import time
>>>> 
>>>> import numpy as np
>>>> from colorama import Fore
>>>> from firedrake import COMM_SELF, COMM_WORLD
>>>> from firedrake.petsc import PETSc
>>>> from mpi4py import MPI
>>>> from numpy.testing import assert_array_almost_equal
>>>> 
>>>> from utilities import Print
>>>> 
>>>> nproc = COMM_WORLD.size
>>>> rank = COMM_WORLD.rank
>>>> 
>>>> def create_petsc_matrix(input_array, sparse=True):
>>>> """Create a PETSc matrix from an input_array
>>>> 
>>>> Args:
>>>> input_array (np array): Input array
>>>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to 
>>>> None.
>>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>>> True.
>>>> 
>>>> Returns:
>>>> PETSc mat: PETSc mpi matrix
>>>> """
>>>> # Check if input_array is 1D and reshape if necessary
>>>> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-10 Thread Thanasis Boutsikakis
Hi all,

Revisiting my code and the proposed solution from Pierre, I realized this works 
only in sequential. The reason is that PETSc partitions those matrices only 
row-wise, which leads to an error due to the mismatch between number of columns 
of A (non-partitioned) and the number of rows of Phi (partitioned).

"""Experimenting with PETSc mat-mat multiplication"""

import time

import numpy as np
from colorama import Fore
from firedrake import COMM_SELF, COMM_WORLD
from firedrake.petsc import PETSc
from mpi4py import MPI
from numpy.testing import assert_array_almost_equal

from utilities import Print

nproc = COMM_WORLD.size
rank = COMM_WORLD.rank

def create_petsc_matrix(input_array, sparse=True):
    """Create a PETSc matrix from an input_array

    Args:
        input_array (np array): Input array
        partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        PETSc mat: PETSc mpi matrix
    """
    # Check if input_array is 1D and reshape if necessary
    assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
    global_rows, global_cols = input_array.shape
    size = ((None, global_rows), (global_cols, global_cols))

    # Create a sparse or dense matrix based on the 'sparse' argument
    if sparse:
        matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
    else:
        matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
    matrix.setUp()

    local_rows_start, local_rows_end = matrix.getOwnershipRange()

    for counter, i in enumerate(range(local_rows_start, local_rows_end)):
        # Calculate the correct row in the array for the current process
        row_in_array = counter + local_rows_start
        matrix.setValues(
            i, range(global_cols), input_array[row_in_array, :], addv=False
        )

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix

# 
# EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
#  A' = Phi.T * A * Phi
# [k x k] <- [k x m] x [m x m] x [m x k]
# 

m, k = 100, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, m))
Phi_np = np.random.randint(low=0, high=6, size=(m, k))

# 
# TEST: Galerkin projection of numpy matrices A_np and Phi_np
# 
Aprime_np = Phi_np.T @ A_np @ Phi_np
Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
Print(f"{Aprime_np}")

# Create A as an mpi matrix distributed on each process
A = create_petsc_matrix(A_np, sparse=False)

# Create Phi as an mpi matrix distributed on each process
Phi = create_petsc_matrix(Phi_np, sparse=False)

# Create an empty PETSc matrix object to store the result of the PtAP operation.
# This will hold the result A' = Phi.T * A * Phi after the computation.
A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)

# Perform the PtAP (Phi Transpose times A times Phi) operation.
# In mathematical terms, this operation is A' = Phi.T * A * Phi.
# A_prime will store the result of the operation.
A_prime = A.ptap(Phi)

Here is the error

MATRIX mpiaij A [100x100]
Assembled

Partitioning for A:
  Rank 0: Rows [0, 34)
  Rank 1: Rows [34, 67)
  Rank 2: Rows [67, 100)

MATRIX mpiaij Phi [100x7]
Assembled

Partitioning for Phi:
  Rank 0: Rows [0, 34)
  Rank 1: Rows [34, 67)
  Rank 2: Rows [67, 100)

Traceback (most recent call last):
  File "/Users/boutsitron/work/galerkin_projection.py", line 87, in 
A_prime = A.ptap(Phi)
  ^^^
  File "petsc4py/PETSc/Mat.pyx", line 1525, in petsc4py.PETSc.Mat.ptap
petsc4py.PETSc.Error: error code 60
[0] MatPtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9896
[0] MatProductSetFromOptions() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:541
[0] MatProductSetFromOptions_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matproduct.c:435
[0] MatProductSetFromOptions_MPIAIJ() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2372
[0] MatProductSetFromOptions_MPIAIJ_PtAP() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/impls/aij/mpi/mpimatmatmult.c:2266
[0] Nonconforming object sizes
[0] Matrix local dimensions are incompatible, Acol (0, 100) != Prow (0,34)
Abort(1) on node 0 (rank 0 in comm 496): application called 
MPI_Abort(PYOP2_COMM_WORLD, 1) - process 0

Any thoughts?

Thanks,
Thanos
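
A small diagnostic sketch (standalone helper, illustrative name) for printing
the local row and column ranges that MatPtAP compares; with the column layout
forced to (global_cols, global_cols) on every rank, A's local column range on
rank 0 is [0, 100) while Phi's local row range is [0, 34), which is exactly the
mismatch reported above:

from petsc4py import PETSc


def report_layout(M, name):
    """Print the local row and column ranges of M on every rank."""
    r0, r1 = M.getOwnershipRange()
    c0, c1 = M.getOwnershipRangeColumn()
    PETSc.Sys.syncPrint(f"{name}: local rows [{r0}, {r1}), local cols [{c0}, {c1})")
    PETSc.Sys.syncFlush()


# report_layout(A, "A"); report_layout(Phi, "Phi")  # with A, Phi as built above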

> On 5 Oct 2023, at 14:23, Thanasis Boutsikakis 
>  wrote:
> 
> This works Pierre. Amazing input, thanks a lot!
> 
>> 

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Thanasis Boutsikakis
This works Pierre. Amazing input, thanks a lot!

> On 5 Oct 2023, at 14:17, Pierre Jolivet  wrote:
> 
> Not a petsc4py expert here, but you may want to try instead:
> A_prime = A.ptap(Phi)
> 
> Thanks,
> Pierre
> 
>> On 5 Oct 2023, at 2:02 PM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Thanks Pierre! So I tried this and got a segmentation fault. Is this 
>> supposed to work right off the bat, or am I missing something?
>> 
>> [0]PETSC ERROR: 
>> 
>> [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
>> probably memory access out of range
>> [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
>> [0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
>> https://petsc.org/release/faq/
>> [0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and 
>> run
>> [0]PETSC ERROR: to get more information on the crash.
>> [0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is 
>> causing the crash.
>> Abort(59) on node 0 (rank 0 in comm 0): application called 
>> MPI_Abort(MPI_COMM_WORLD, 59) - process 0
>> 
>> """Experimenting with PETSc mat-mat multiplication"""
>> 
>> import time
>> 
>> import numpy as np
>> from colorama import Fore
>> from firedrake import COMM_SELF, COMM_WORLD
>> from firedrake.petsc import PETSc
>> from mpi4py import MPI
>> from numpy.testing import assert_array_almost_equal
>> 
>> from utilities import (
>> Print,
>> create_petsc_matrix,
>> print_matrix_partitioning,
>> )
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> # 
>> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
>> Phi
>> #  A' = Phi.T * A * Phi
>> # [k x k] <- [k x m] x [m x m] x [m x k]
>> # 
>> 
>> m, k = 11, 7
>> # Generate the random numpy matrices
>> np.random.seed(0)  # sets the seed to 0
>> A_np = np.random.randint(low=0, high=6, size=(m, m))
>> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
>> 
>> # 
>> # TEST: Galerkin projection of numpy matrices A_np and Phi_np
>> # 
>> Aprime_np = Phi_np.T @ A_np @ Phi_np
>> Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
>> Print(f"{Aprime_np}")
>> 
>> # Create A as an mpi matrix distributed on each process
>> A = create_petsc_matrix(A_np, sparse=False)
>> 
>> # Create Phi as an mpi matrix distributed on each process
>> Phi = create_petsc_matrix(Phi_np, sparse=False)
>> 
>> # Create an empty PETSc matrix object to store the result of the PtAP 
>> operation.
>> # This will hold the result A' = Phi.T * A * Phi after the computation.
>> A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)
>> 
>> # Perform the PtAP (Phi Transpose times A times Phi) operation.
>> # In mathematical terms, this operation is A' = Phi.T * A * Phi.
>> # A_prime will store the result of the operation.
>> Phi.PtAP(A, A_prime)
>> 
>>> On 5 Oct 2023, at 13:22, Pierre Jolivet  wrote:
>>> 
>>> How about using ptap which will use MatPtAP?
>>> It will be more efficient (and it will help you bypass the issue).
>>> 
>>> Thanks,
>>> Pierre
>>> 
>>>> On 5 Oct 2023, at 1:18 PM, Thanasis Boutsikakis 
>>>>  wrote:
>>>> 
>>>> Sorry, forgot function create_petsc_matrix()
>>>> 
>>>> def create_petsc_matrix(input_array, sparse=True):
>>>> """Create a PETSc matrix from an input_array
>>>> 
>>>> Args:
>>>> input_array (np array): Input array
>>>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to 
>>>> None.
>>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>>> True.
>>>> 
>>>> Returns:
>>>> PETSc mat: PETSc matrix
>>>> """
>>>> # Check if input_array is 1D and reshape if necessary
>>>> assert len(input_array.shape) == 2, "Input array should be 
>>>> 2-dimensional"
>>>> global_rows, global_cols = input_array.shape
>>>>

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Thanasis Boutsikakis
Thanks Pierre! So I tried this and got a segmentation fault. Is this supposed 
to work right off the bat, or am I missing something?

[0]PETSC ERROR: 

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably 
memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing 
the crash.
Abort(59) on node 0 (rank 0 in comm 0): application called 
MPI_Abort(MPI_COMM_WORLD, 59) - process 0

"""Experimenting with PETSc mat-mat multiplication"""

import time

import numpy as np
from colorama import Fore
from firedrake import COMM_SELF, COMM_WORLD
from firedrake.petsc import PETSc
from mpi4py import MPI
from numpy.testing import assert_array_almost_equal

from utilities import (
Print,
create_petsc_matrix,
print_matrix_partitioning,
)

nproc = COMM_WORLD.size
rank = COMM_WORLD.rank

# 
# EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
#  A' = Phi.T * A * Phi
# [k x k] <- [k x m] x [m x m] x [m x k]
# 

m, k = 11, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, m))
Phi_np = np.random.randint(low=0, high=6, size=(m, k))

# 
# TEST: Galerkin projection of numpy matrices A_np and Phi_np
# 
Aprime_np = Phi_np.T @ A_np @ Phi_np
Print(f"MATRIX Aprime_np [{Aprime_np.shape[0]}x{Aprime_np.shape[1]}]")
Print(f"{Aprime_np}")

# Create A as an mpi matrix distributed on each process
A = create_petsc_matrix(A_np, sparse=False)

# Create Phi as an mpi matrix distributed on each process
Phi = create_petsc_matrix(Phi_np, sparse=False)

# Create an empty PETSc matrix object to store the result of the PtAP operation.
# This will hold the result A' = Phi.T * A * Phi after the computation.
A_prime = create_petsc_matrix(np.zeros((k, k)), sparse=False)

# Perform the PtAP (Phi Transpose times A times Phi) operation.
# In mathematical terms, this operation is A' = Phi.T * A * Phi.
# A_prime will store the result of the operation.
Phi.PtAP(A, A_prime)
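
For reference, the calling convention Pierre points to in the quoted reply
below, assuming A and Phi as assembled by the script above (a sketch, not the
original code): ptap() is called on the middle matrix A, takes the projection
matrix as its argument, and allocates the result itself, so no pre-created
A_prime is needed:

A_prime = A.ptap(Phi)  # computes Phi^T * A * Phi and returns a new k x k matrix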

> On 5 Oct 2023, at 13:22, Pierre Jolivet  wrote:
> 
> How about using ptap which will use MatPtAP?
> It will be more efficient (and it will help you bypass the issue).
> 
> Thanks,
> Pierre
> 
>> On 5 Oct 2023, at 1:18 PM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Sorry, forgot function create_petsc_matrix()
>> 
>> def create_petsc_matrix(input_array, sparse=True):
>> """Create a PETSc matrix from an input_array
>> 
>> Args:
>> input_array (np array): Input array
>> partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>> True.
>> 
>> Returns:
>> PETSc mat: PETSc matrix
>> """
>> # Check if input_array is 1D and reshape if necessary
>> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>> global_rows, global_cols = input_array.shape
>> 
>> size = ((None, global_rows), (global_cols, global_cols))
>> 
>> # Create a sparse or dense matrix based on the 'sparse' argument
>> if sparse:
>> matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>> else:
>> matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>> matrix.setUp()
>> 
>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>> 
>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>> # Calculate the correct row in the array for the current process
>> row_in_array = counter + local_rows_start
>> matrix.setValues(
>> i, range(global_cols), input_array[row_in_array, :], addv=False
>> )
>> 
>> # Assemble the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>>> On 5 Oct 2023, at 13:09, Thanasis Boutsikakis 
>>>  wrote:
>>> 
>>> Hi everyone,
>>> 
>>> I am trying a Galerkin projection (see MFE below) and I cannot get the

Re: [petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Thanasis Boutsikakis
Sorry, forgot function create_petsc_matrix()

def create_petsc_matrix(input_array, sparse=True):
    """Create a PETSc matrix from an input_array

    Args:
        input_array (np array): Input array
        partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        PETSc mat: PETSc matrix
    """
    # Check if input_array is 1D and reshape if necessary
    assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
    global_rows, global_cols = input_array.shape

    size = ((None, global_rows), (global_cols, global_cols))

    # Create a sparse or dense matrix based on the 'sparse' argument
    if sparse:
        matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
    else:
        matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
    matrix.setUp()

    local_rows_start, local_rows_end = matrix.getOwnershipRange()

    for counter, i in enumerate(range(local_rows_start, local_rows_end)):
        # Calculate the correct row in the array for the current process
        row_in_array = counter + local_rows_start
        matrix.setValues(
            i, range(global_cols), input_array[row_in_array, :], addv=False
        )

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix

> On 5 Oct 2023, at 13:09, Thanasis Boutsikakis 
>  wrote:
> 
> Hi everyone,
> 
> I am trying a Galerkin projection (see MFE below) and I cannot get the 
> Phi.transposeMatMult(A, A1) work. The error is
> 
> Phi.transposeMatMult(A, A1)
>   File "petsc4py/PETSc/Mat.pyx", line 1514, in 
> petsc4py.PETSc.Mat.transposeMatMult
> petsc4py.PETSc.Error: error code 56
> [0] MatTransposeMatMult() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10135
> [0] MatProduct_Private() at 
> /Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9989
> [0] No support for this operation for this object type
> [0] Call MatProductCreate() first
> 
> Do you know if these are exposed to petsc4py, or maybe there is another way? I 
> cannot get the MFE to work (neither in sequential nor in parallel).
> 
> """Experimenting with PETSc mat-mat multiplication"""
> 
> import time
> 
> import numpy as np
> from colorama import Fore
> from firedrake import COMM_SELF, COMM_WORLD
> from firedrake.petsc import PETSc
> from mpi4py import MPI
> from numpy.testing import assert_array_almost_equal
> 
> from utilities import (
> Print,
> create_petsc_matrix,
> )
> 
> nproc = COMM_WORLD.size
> rank = COMM_WORLD.rank
> 
> # 
> # EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix 
> Phi
> #  A' = Phi.T * A * Phi
> # [k x k] <- [k x m] x [m x m] x [m x k]
> # 
> 
> m, k = 11, 7
> # Generate the random numpy matrices
> np.random.seed(0)  # sets the seed to 0
> A_np = np.random.randint(low=0, high=6, size=(m, m))
> Phi_np = np.random.randint(low=0, high=6, size=(m, k))
> 
> # Create A as an mpi matrix distributed on each process
> A = create_petsc_matrix(A_np)
> 
> # Create Phi as an mpi matrix distributed on each process
> Phi = create_petsc_matrix(Phi_np)
> 
> A1 = create_petsc_matrix(np.zeros((k, m)))
> 
> # Now A1 contains the result of Phi^T * A
> Phi.transposeMatMult(A, A1)
> 



[petsc-users] Galerkin projection using petsc4py

2023-10-05 Thread Thanasis Boutsikakis
Hi everyone,

I am trying a Galerkin projection (see MFE below) and I cannot get the 
Phi.transposeMatMult(A, A1) work. The error is

Phi.transposeMatMult(A, A1)
  File "petsc4py/PETSc/Mat.pyx", line 1514, in 
petsc4py.PETSc.Mat.transposeMatMult
petsc4py.PETSc.Error: error code 56
[0] MatTransposeMatMult() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:10135
[0] MatProduct_Private() at 
/Users/boutsitron/firedrake/src/petsc/src/mat/interface/matrix.c:9989
[0] No support for this operation for this object type
[0] Call MatProductCreate() first

Do you know if these are exposed to petsc4py, or maybe there is another way? I 
cannot get the MFE to work (neither in sequential nor in parallel).

"""Experimenting with PETSc mat-mat multiplication"""

import time

import numpy as np
from colorama import Fore
from firedrake import COMM_SELF, COMM_WORLD
from firedrake.petsc import PETSc
from mpi4py import MPI
from numpy.testing import assert_array_almost_equal

from utilities import (
Print,
create_petsc_matrix,
)

nproc = COMM_WORLD.size
rank = COMM_WORLD.rank

# 
# EXP: Galerkin projection of an mpi PETSc matrix A with an mpi PETSc matrix Phi
#  A' = Phi.T * A * Phi
# [k x k] <- [k x m] x [m x m] x [m x k]
# 

m, k = 11, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, m))
Phi_np = np.random.randint(low=0, high=6, size=(m, k))

# Create A as an mpi matrix distributed on each process
A = create_petsc_matrix(A_np)

# Create Phi as an mpi matrix distributed on each process
Phi = create_petsc_matrix(Phi_np)

A1 = create_petsc_matrix(np.zeros((k, m)))

# Now A1 contains the result of Phi^T * A
Phi.transposeMatMult(A, A1)



[petsc-users] Concatenation of local-to-global matrix

2023-10-03 Thread Thanasis Boutsikakis
I am trying to multiply a sequential PETSc matrix with an mpi PETSc matrix in 
parallel. The final step is to concatenate the product matrix, which is a local 
sequential PETSc matrix that is different for every proc, so that I get the 
full mpi matrix as a result. This has proven to work, but setting the values 
one by one using a loop is very inefficient and slow.

In the following MFE, I am trying to make this concatenation more efficient by 
setting the values in batches. However, it doesn’t work and I am wondering why 
(a batched sketch follows the listing below):

"""Experimenting with PETSc mat-mat multiplication"""

import time

import numpy as np
from colorama import Fore
from firedrake import COMM_SELF, COMM_WORLD
from firedrake.petsc import PETSc
from mpi4py import MPI
from numpy.testing import assert_array_almost_equal

from utilities import Print

size = COMM_WORLD.size
rank = COMM_WORLD.rank

def create_petsc_matrix(input_array, sparse=True):
    """Create a PETSc matrix from an input_array

    Args:
        input_array (np array): Input array
        partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        PETSc mat: PETSc matrix
    """
    # Check if input_array is 1D and reshape if necessary
    assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
    global_rows, global_cols = input_array.shape

    size = ((None, global_rows), (global_cols, global_cols))

    # Create a sparse or dense matrix based on the 'sparse' argument
    if sparse:
        matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
    else:
        matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
    matrix.setUp()

    local_rows_start, local_rows_end = matrix.getOwnershipRange()

    for counter, i in enumerate(range(local_rows_start, local_rows_end)):
        # Calculate the correct row in the array for the current process
        row_in_array = counter + local_rows_start
        matrix.setValues(
            i, range(global_cols), input_array[row_in_array, :], addv=False
        )

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix


def get_local_submatrix(A):
    """Get the local submatrix of A

    Args:
        A (mpi PETSc mat): partitioned PETSc matrix

    Returns:
        seq mat: PETSc matrix
    """
    local_rows_start, local_rows_end = A.getOwnershipRange()
    local_rows = local_rows_end - local_rows_start
    comm = A.getComm()
    rows = PETSc.IS().createStride(
        local_rows, first=local_rows_start, step=1, comm=comm
    )
    _, k = A.getSize()  # Get the number of columns (k) from A's size
    cols = PETSc.IS().createStride(k, first=0, step=1, comm=comm)

    # Getting the local submatrix
    # TODO: To be replaced by MatMPIAIJGetLocalMat() in the future (see
    # petsc-users mailing list). There is a missing petsc4py binding, need to
    # add it myself (and please create a merge request)
    A_local = A.createSubMatrices(rows, cols)[0]
    return A_local


def create_petsc_matrix_seq(input_array):
    """Building a sequential PETSc matrix from an array

    Args:
        input_array (np array): Input array

    Returns:
        seq mat: PETSc matrix
    """
    assert len(input_array.shape) == 2

    m, n = input_array.shape
    matrix = PETSc.Mat().createAIJ(size=(m, n), comm=COMM_SELF)
    matrix.setUp()

    matrix.setValues(range(m), range(n), input_array, addv=False)

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix


def multiply_matrices_seq(A_seq, B_seq):
    """Multiply 2 sequential matrices

    Args:
        A_seq (seqaij): local submatrix of A
        B_seq (seqaij): sequential matrix B

    Returns:
        seq mat: PETSc matrix that is the product of A_seq and B_seq
    """
    _, A_seq_cols = A_seq.getSize()
    B_seq_rows, _ = B_seq.getSize()
    assert (
        A_seq_cols == B_seq_rows
    ), f"Incompatible matrix sizes for multiplication: {A_seq_cols} != {B_seq_rows}"
    C_local = A_seq.matMult(B_seq)
    return C_local


def concatenate_local_to_global_matrix(local_matrix, mat_type=None):
    """Create the global matrix C from the local submatrix local_matrix

    Args:
        local_matrix (seqaij): local submatrix of global_matrix
        partition_like (mpiaij): partitioned PETSc matrix
        mat_type (str): type of the global matrix. Defaults to None. If None,
            the type of local_matrix is used.

    Returns:
        mpi PETSc mat: partitioned PETSc matrix
    """
    local_matrix_rows, local_matrix_cols = local_matrix.getSize()
    global_rows = COMM_WORLD.allreduce(local_matrix_rows, op=MPI.SUM)

    # Determine the local portion of the vector
    size = ((None, global_rows), (local_matrix_cols, local_matrix_cols))

    if mat_type is None:
        mat_type = local_matrix.getType()

    if "dense" in mat_type:
   

Re: [petsc-users] Orthogonalization of a (sparse) PETSc matrix

2023-08-29 Thread Thanasis Boutsikakis
Thanks Jose, 

This works indeed. However, I was under the impression that this conversion 
might be very costly for big matrices with low sparsity, since it would scale 
with the number of non-zero values.

Do you have any idea of the efficiency of this operation?

Thanks
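
As a rough gauge (a sketch, assuming an AIJ input), the dense copy costs
O(m*n) storage regardless of sparsity, so comparing the stored nonzeros
against m*n gives the expected blow-up before calling convert('dense'):

from petsc4py import PETSc


def dense_blowup(A):
    """Rough memory ratio of a dense copy versus the stored AIJ nonzeros."""
    m, n = A.getSize()
    nnz = A.getInfo()["nz_used"]   # nonzeros actually stored on this matrix
    return (m * n) / max(nnz, 1)   # ~1 means the matrix is nearly dense anyway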

> On 29 Aug 2023, at 19:13, Jose E. Roman  wrote:
> 
> The result of bv.orthogonalize() is most probably a dense matrix, and the 
> result replaces the input matrix, that's why the input matrix is required to 
> be dense.
> 
> You can simply do this:
> 
>  bv = SLEPc.BV().createFromMat(A.convert('dense'))
> 
> Jose
> 
>> El 29 ago 2023, a las 18:50, Thanasis Boutsikakis 
>>  escribió:
>> 
>> Hi all, I have the following code that orthogonalizes a PETSc matrix. The 
>> problem is that this implementation requires that the PETSc matrix is dense, 
>> otherwise, it fails at bv.SetFromOptions(). Hence the assert in 
>> orthogonality().
>> 
>> What could I do in order to be able to orthogonalize sparse matrices as 
>> well? Could I convert it efficiently? (I tried to no avail)
>> 
>> Thanks!
>> 
>> """Experimenting with matrix orthogonalization"""
>> 
>> import contextlib
>> import sys
>> import time
>> import numpy as np
>> from firedrake import COMM_WORLD
>> from firedrake.petsc import PETSc
>> 
>> import slepc4py
>> 
>> slepc4py.init(sys.argv)
>> from slepc4py import SLEPc
>> 
>> from numpy.testing import assert_array_almost_equal
>> 
>> EPSILON_USER = 1e-4
>> EPS = sys.float_info.epsilon
>> 
>> 
>> def Print(message: str):
>>"""Print function that prints only on rank 0 with color
>> 
>>Args:
>>message (str): message to be printed
>>"""
>>PETSc.Sys.Print(message)
>> 
>> 
>> def create_petsc_matrix(input_array, sparse=True):
>>"""Create a PETSc matrix from an input_array
>> 
>>Args:
>>input_array (np array): Input array
>>partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
>>sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>> True.
>> 
>>Returns:
>>PETSc mat: PETSc matrix
>>"""
>># Check if input_array is 1D and reshape if necessary
>>assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>>global_rows, global_cols = input_array.shape
>> 
>>size = ((None, global_rows), (global_cols, global_cols))
>> 
>># Create a sparse or dense matrix based on the 'sparse' argument
>>if sparse:
>>matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
>>else:
>>matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
>>matrix.setUp()
>> 
>>local_rows_start, local_rows_end = matrix.getOwnershipRange()
>> 
>>for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>># Calculate the correct row in the array for the current process
>>row_in_array = counter + local_rows_start
>>matrix.setValues(
>>i, range(global_cols), input_array[row_in_array, :], addv=False
>>)
>> 
>># Assemble the matrix to compute the final structure
>>matrix.assemblyBegin()
>>matrix.assemblyEnd()
>> 
>>return matrix
>> 
>> 
>> def orthogonality(A):  # sourcery skip: avoid-builtin-shadow
>>"""Checking and correcting orthogonality
>> 
>>Args:
>>A (PETSc.Mat): Matrix of size [m x k].
>> 
>>Returns:
>>PETSc.Mat: Matrix of size [m x k].
>>"""
>># Check if the matrix is dense
>>mat_type = A.getType()
>>assert mat_type in (
>>"seqdense",
>>"mpidense",
>>), "A must be a dense matrix. SLEPc.BV().createFromMat() requires a dense 
>> matrix."
>> 
>>m, k = A.getSize()
>> 
>>Phi1 = A.getColumnVector(0)
>>Phi2 = A.getColumnVector(k - 1)
>> 
>># Compute dot product using PETSc function
>>dot_product = Phi1.dot(Phi2)
>> 
>>if abs(dot_product) > min(EPSILON_USER, EPS * m):
>>Print("Matrix is not orthogonal")
>> 
>># Type can be CHOL, GS, mro(), SVQB, TSQR, TSQRCHOL
>>_type = SLEPc.BV().OrthogBlockType.GS
>> 
>>bv = SLEPc.BV().createFromMat(A)
>

[petsc-users] Orthogonalization of a (sparse) PETSc matrix

2023-08-29 Thread Thanasis Boutsikakis
Hi all, I have the following code that orthogonalizes a PETSc matrix. The 
problem is that this implementation requires that the PETSc matrix is dense, 
otherwise, it fails at bv.SetFromOptions(). Hence the assert in orthogonality().

What could I do in order to be able to orthogonalize sparse matrices as well? 
Could I convert it efficiently? (I tried to no avail)

Thanks!

"""Experimenting with matrix orthogonalization"""

import contextlib
import sys
import time
import numpy as np
from firedrake import COMM_WORLD
from firedrake.petsc import PETSc

import slepc4py

slepc4py.init(sys.argv)
from slepc4py import SLEPc

from numpy.testing import assert_array_almost_equal

EPSILON_USER = 1e-4
EPS = sys.float_info.epsilon


def Print(message: str):
    """Print function that prints only on rank 0 with color

    Args:
        message (str): message to be printed
    """
    PETSc.Sys.Print(message)


def create_petsc_matrix(input_array, sparse=True):
    """Create a PETSc matrix from an input_array

    Args:
        input_array (np array): Input array
        partition_like (PETSc mat, optional): Petsc matrix. Defaults to None.
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        PETSc mat: PETSc matrix
    """
    # Check if input_array is 1D and reshape if necessary
    assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
    global_rows, global_cols = input_array.shape

    size = ((None, global_rows), (global_cols, global_cols))

    # Create a sparse or dense matrix based on the 'sparse' argument
    if sparse:
        matrix = PETSc.Mat().createAIJ(size=size, comm=COMM_WORLD)
    else:
        matrix = PETSc.Mat().createDense(size=size, comm=COMM_WORLD)
    matrix.setUp()

    local_rows_start, local_rows_end = matrix.getOwnershipRange()

    for counter, i in enumerate(range(local_rows_start, local_rows_end)):
        # Calculate the correct row in the array for the current process
        row_in_array = counter + local_rows_start
        matrix.setValues(
            i, range(global_cols), input_array[row_in_array, :], addv=False
        )

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix


def orthogonality(A):  # sourcery skip: avoid-builtin-shadow
    """Checking and correcting orthogonality

    Args:
        A (PETSc.Mat): Matrix of size [m x k].

    Returns:
        PETSc.Mat: Matrix of size [m x k].
    """
    # Check if the matrix is dense
    mat_type = A.getType()
    assert mat_type in (
        "seqdense",
        "mpidense",
    ), "A must be a dense matrix. SLEPc.BV().createFromMat() requires a dense matrix."

    m, k = A.getSize()

    Phi1 = A.getColumnVector(0)
    Phi2 = A.getColumnVector(k - 1)

    # Compute dot product using PETSc function
    dot_product = Phi1.dot(Phi2)

    if abs(dot_product) > min(EPSILON_USER, EPS * m):
        Print("Matrix is not orthogonal")

        # Type can be CHOL, GS, mro(), SVQB, TSQR, TSQRCHOL
        _type = SLEPc.BV().OrthogBlockType.GS

        bv = SLEPc.BV().createFromMat(A)
        bv.setFromOptions()
        bv.setOrthogonalization(_type)
        bv.orthogonalize()

        A = bv.createMat()

        Print("Matrix successfully orthogonalized")

        # Assemble the matrix to compute the final structure
        if not A.assembled:
            A.assemblyBegin()
            A.assemblyEnd()
    else:
        Print("Matrix is orthogonal")

    return A


# 
# EXP: Orthogonalization of an mpi PETSc matrix
# 

m, k = 11, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, k))

A = create_petsc_matrix(A_np, sparse=False)

A_orthogonal = orthogonality(A)

# 
# TEST: Orthogonalization of a numpy matrix
# 
# Generate A_np_orthogonal
A_np_orthogonal, _ = np.linalg.qr(A_np)

# Get the local values from A_orthogonal
local_rows_start, local_rows_end = A_orthogonal.getOwnershipRange()
A_orthogonal_local = A_orthogonal.getValues(
range(local_rows_start, local_rows_end), range(k)
)

# Assert the correctness of the multiplication for the local subset
assert_array_almost_equal(
np.abs(A_orthogonal_local),
np.abs(A_np_orthogonal[local_rows_start:local_rows_end, :]),
decimal=5,
)
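
A sketch of the fix Jose suggests in the reply above, reusing
create_petsc_matrix and A_np from the script here (illustrative; with this, the
explicit dense assert in orthogonality() would no longer be needed):

A_sparse = create_petsc_matrix(A_np, sparse=True)         # AIJ this time
bv = SLEPc.BV().createFromMat(A_sparse.convert("dense"))  # dense copy for the BV
bv.setFromOptions()
bv.orthogonalize()
A_orthogonal_from_sparse = bv.createMat()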

Re: [petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-24 Thread Thanasis Boutsikakis
 C_local

Args:
C_local (seqaij): local submatrix of C
A (mpi PETSc mat): PETSc matrix A

Returns:
mpi PETSc mat: partitioned PETSc matrix C
"""
C_local_rows, C_local_cols = C_local.getSize()
local_rows_start, _ = A.getOwnershipRange()
m, _ = A.getSize()
C = PETSc.Mat().createAIJ(
size=((None, m), (C_local_cols, C_local_cols)), comm=COMM_WORLD
)
C.setUp()
for i in range(C_local_rows):
cols, values = C_local.getRow(i)
global_row = i + local_rows_start
C.setValues(global_row, cols, values)
C.assemblyBegin()
C.assemblyEnd()
return C


m, k = 11, 7
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, k))
B_np = np.random.randint(low=0, high=6, size=(k, k))

# Create B as a sequential matrix on each process
B_seq = create_petsc_matrix_seq(B_np)
print_mat_info(B_seq, "B")

A = create_petsc_matrix(A_np)
print_mat_info(A, "A")

# Getting the correct local submatrix to be multiplied by B_seq
A_local = get_local_submatrix(A)

# Multiplication of 2 sequential matrices
C_local = multiply_sequential_matrices(A_local, B_seq)

# Creating the global C matrix
C = create_global_matrix(C_local, A)
print_mat_info(C, "C")

# 
# TEST: Multiplication of 2 numpy matrices
# 
AB_np = np.dot(A_np, B_np)
Print(f"MATRIX AB_np [{AB_np.shape[0]}x{AB_np.shape[1]}]")
Print(AB_np)

# Get the local values from C
local_rows_start, local_rows_end = C.getOwnershipRange()
C_local = C.getValues(range(local_rows_start, local_rows_end), range(k))

# Assert the correctness of the multiplication for the local subset
assert_array_almost_equal(C_local, AB_np[local_rows_start:local_rows_end, :], 
decimal=5)




> On 23 Aug 2023, at 14:40, Pierre Jolivet  wrote:
> 
> 
> 
>> On 23 Aug 2023, at 8:59 PM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Thanks for the clarification Mark.
>> 
>> I have tried such an implementation, but since I could not find the 
>> equivalent of https://petsc.org/main/manualpages/Mat/MatMPIAIJGetLocalMat/ 
>> for petsc4py, I used A.getLocalSubMatrix to do so, which returns a 
>> 'localref' object that I cannot then use to get my local 'seqaij' matrix.
> 
> You really need MatMPIAIJGetLocalMat().
> It seems there is a missing petsc4py binding, either add it yourself (and 
> please create a merge request), or post an issue on GitLab and hope that 
> someone adds it.
> Another way to bypass the issue would be to call MatCreateSubMatrices() with 
> an is_row set to the local rows, and an is_col set to all columns, but this 
> will be less scalable than MatMPIAIJGetLocalMat().
> 
> Thanks,
> Pierre
> 
>> I also tried A.getDenseLocalMatrix() but it seems not to be suitable for my 
>> case. I couldn’t find an example in the petsc4py source code or something 
>> relevant, do you have any ideas?
>> 
>> """Experimenting with petsc mat-mat multiplication"""
>> # import pdb
>> 
>> import numpy as np
>> from firedrake import COMM_WORLD
>> from firedrake.petsc import PETSc
>> from numpy.testing import assert_array_almost_equal
>> import pdb
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> 
>> def Print(x: str):
>> """Prints the string only on the root process
>> 
>> Args:
>> x (str): String to be printed
>> """
>> PETSc.Sys.Print(x)
>> 
>> 
>> def create_petsc_matrix_seq(input_array):
>> """Building a sequential petsc matrix from an array
>> 
>> Args:
>> input_array (np array): Input array
>> 
>> Returns:
>> seq mat: PETSc matrix
>> """
>> assert len(input_array.shape) == 2
>> 
>> m, n = input_array.shape
>> matrix = PETSc.Mat().createAIJ(size=(m, n), comm=PETSc.COMM_SELF)
>> matrix.setUp()
>> 
>> matrix.setValues(range(m), range(n), input_array, addv=False)
>> 
>> # Assemble the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>> 
>> def create_petsc_matrix(input_array, partition_like=None):
>> """Create a PETSc matrix from an input_array
>> 
>> Args:
>> input_array (np array): Input array
>> partition_like (petsc mat, optional): Petsc matrix. Defaults to None.
>> sparse (bool, optional): Toggl

Re: [petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-23 Thread Thanasis Boutsikakis
bly the matrix to compute the final structure
local_A.assemblyBegin()
local_A.assemblyEnd()

Print(local_A.getType())
Print(local_A.getSizes())

# pdb.set_trace()

# Multiply the two matrices
local_C = local_A.matMult(B_seq)


> On 23 Aug 2023, at 12:56, Mark Adams  wrote:
> 
> 
> 
> On Wed, Aug 23, 2023 at 5:36 AM Thanasis Boutsikakis wrote:
>> Thanks for the suggestion Pierre.
>> 
>> Yes B is duplicated by all processes.
>> 
>> In this case, should B be created as a sequential sparse matrix using 
>> COMM_SELF?
> 
> Yes, that is what Pierre said,
> 
> Mark
>  
>> I guess if not, the multiplication of B with the output of 
>> https://petsc.org/main/manualpages/Mat/MatMPIAIJGetLocalMat/ would not go 
>> through, right? 
>> 
>> Thanks,
>> Thanos
>>  
>>> On 23 Aug 2023, at 10:47, Pierre Jolivet wrote:
>>> 
>>> 
>>> 
>>>> On 23 Aug 2023, at 5:35 PM, Thanasis Boutsikakis wrote:
>>>> 
>>>> Hi all,
>>>> 
>>>> I am trying to multiply two PETSc matrices as C = A * B, where A is a tall 
>>>> matrix and B is a relatively small matrix.
>>>> 
>>>> I have taken the decision to create A as a (row-)partitioned matrix and B 
>>>> as a non-partitioned matrix that is entirely shared by all procs (to avoid 
>>>> unnecessary communication).
>>>> 
>>>> Here is my code:
>>>> 
>>>> import numpy as np
>>>> from firedrake import COMM_WORLD
>>>> from firedrake.petsc import PETSc
>>>> from numpy.testing import assert_array_almost_equal
>>>> 
>>>> nproc = COMM_WORLD.size
>>>> rank = COMM_WORLD.rank
>>>> 
>>>> def create_petsc_matrix_non_partitioned(input_array):
>>>> """Building a mpi non-partitioned petsc matrix from an array
>>>> 
>>>> Args:
>>>> input_array (np array): Input array
>>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>>> True.
>>>> 
>>>> Returns:
>>>> mpi mat: PETSc matrix
>>>> """
>>>> assert len(input_array.shape) == 2
>>>> 
>>>> m, n = input_array.shape
>>>> 
>>>> matrix = PETSc.Mat().createAIJ(size=((m, n), (m, n)), comm=COMM_WORLD)
>>>> 
>>>> # Set the values of the matrix
>>>> matrix.setValues(range(m), range(n), input_array[:, :], addv=False)
>>>> 
>>>> # Assemble the matrix to compute the final structure
>>>> matrix.assemblyBegin()
>>>> matrix.assemblyEnd()
>>>> 
>>>> return matrix
>>>> 
>>>> 
>>>> def create_petsc_matrix(input_array, partition_like=None):
>>>> """Create a PETSc matrix from an input_array
>>>> 
>>>> Args:
>>>> input_array (np array): Input array
>>>> partition_like (petsc mat, optional): Petsc matrix. Defaults to 
>>>> None.
>>>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>>>> True.
>>>> 
>>>> Returns:
>>>> petsc mat: PETSc matrix
>>>> """
>>>> # Check if input_array is 1D and reshape if necessary
>>>> assert len(input_array.shape) == 2, "Input array should be 
>>>> 2-dimensional"
>>>> global_rows, global_cols = input_array.shape
>>>> 
>>>> comm = COMM_WORLD
>>>> if partition_like is not None:
>>>> local_rows_start, local_rows_end = 
>>>> partition_like.getOwnershipRange()
>>>> local_rows = local_rows_end - local_rows_start
>>>> 
>>>> # No parallelization in the columns, set local_cols = None to 
>>>> parallelize
>>>> size = ((local_rows, global_rows), (global_cols, global_cols))
>>>> else:
>>>> size = ((None, global_rows), (global_cols, global_cols))
>>>> 
>>>> matrix = PETSc.Mat().createAIJ(size=size, comm=comm)
>>>> matrix.setUp()
>>>> 
>>>> local_rows_start, local_rows_end

Re: [petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-23 Thread Thanasis Boutsikakis
Thanks for the suggestion Pierre.

Yes B is duplicated by all processes.

In this case, should B be created as a sequential sparse matrix using 
COMM_SELF? I guess if not, the multiplication of B with the output of 
https://petsc.org/main/manualpages/Mat/MatMPIAIJGetLocalMat/ would not go 
through, right? 

Thanks,
Thanos
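
A minimal sketch of the COMM_SELF route discussed here (standalone, with B_np
regenerated locally; illustrative only):

import numpy as np
from petsc4py import PETSc

k = 3
np.random.seed(0)
B_np = np.random.randint(low=0, high=6, size=(k, k))

# Every rank owns a full copy of the small matrix B as a *sequential* AIJ
# matrix (COMM_SELF), so it can be multiplied with the local block of A.
B_seq = PETSc.Mat().createAIJ((k, k), comm=PETSc.COMM_SELF)
B_seq.setUp()
B_seq.setValues(range(k), range(k), B_np, addv=False)
B_seq.assemblyBegin()
B_seq.assemblyEnd()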
 
> On 23 Aug 2023, at 10:47, Pierre Jolivet  wrote:
> 
> 
> 
>> On 23 Aug 2023, at 5:35 PM, Thanasis Boutsikakis 
>>  wrote:
>> 
>> Hi all,
>> 
>> I am trying to multiply two PETSc matrices as C = A * B, where A is a tall 
>> matrix and B is a relatively small matrix.
>> 
>> I have taken the decision to create A as a (row-)partitioned matrix and B as 
>> a non-partitioned matrix that is entirely shared by all procs (to avoid 
>> unnecessary communication).
>> 
>> Here is my code:
>> 
>> import numpy as np
>> from firedrake import COMM_WORLD
>> from firedrake.petsc import PETSc
>> from numpy.testing import assert_array_almost_equal
>> 
>> nproc = COMM_WORLD.size
>> rank = COMM_WORLD.rank
>> 
>> def create_petsc_matrix_non_partitioned(input_array):
>> """Building a mpi non-partitioned petsc matrix from an array
>> 
>> Args:
>> input_array (np array): Input array
>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>> True.
>> 
>> Returns:
>> mpi mat: PETSc matrix
>> """
>> assert len(input_array.shape) == 2
>> 
>> m, n = input_array.shape
>> 
>> matrix = PETSc.Mat().createAIJ(size=((m, n), (m, n)), comm=COMM_WORLD)
>> 
>> # Set the values of the matrix
>> matrix.setValues(range(m), range(n), input_array[:, :], addv=False)
>> 
>> # Assemble the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>> 
>> def create_petsc_matrix(input_array, partition_like=None):
>> """Create a PETSc matrix from an input_array
>> 
>> Args:
>> input_array (np array): Input array
>> partition_like (petsc mat, optional): Petsc matrix. Defaults to None.
>> sparse (bool, optional): Toggle for sparse or dense. Defaults to 
>> True.
>> 
>> Returns:
>> petsc mat: PETSc matrix
>> """
>> # Check if input_array is 1D and reshape if necessary
>> assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
>> global_rows, global_cols = input_array.shape
>> 
>> comm = COMM_WORLD
>> if partition_like is not None:
>> local_rows_start, local_rows_end = partition_like.getOwnershipRange()
>> local_rows = local_rows_end - local_rows_start
>> 
>> # No parallelization in the columns, set local_cols = None to 
>> parallelize
>> size = ((local_rows, global_rows), (global_cols, global_cols))
>> else:
>> size = ((None, global_rows), (global_cols, global_cols))
>> 
>> matrix = PETSc.Mat().createAIJ(size=size, comm=comm)
>> matrix.setUp()
>> 
>> local_rows_start, local_rows_end = matrix.getOwnershipRange()
>> 
>> for counter, i in enumerate(range(local_rows_start, local_rows_end)):
>> # Calculate the correct row in the array for the current process
>> row_in_array = counter + local_rows_start
>> matrix.setValues(
>> i, range(global_cols), input_array[row_in_array, :], addv=False
>> )
>> 
>> # Assemble the matrix to compute the final structure
>> matrix.assemblyBegin()
>> matrix.assemblyEnd()
>> 
>> return matrix
>> 
>> 
>> m, k = 10, 3
>> # Generate the random numpy matrices
>> np.random.seed(0)  # sets the seed to 0
>> A_np = np.random.randint(low=0, high=6, size=(m, k))
>> B_np = np.random.randint(low=0, high=6, size=(k, k))
>> 
>> 
>> A = create_petsc_matrix(A_np)
>> 
>> B = create_petsc_matrix_non_partitioned(B_np)
>> 
>> # Now perform the multiplication
>> C = A * B
>> 
>> The problem with this is that there is a mismatch between the local rows of 
>> A (which depend on the partitioning) and the global rows of B (3 for all 
>> procs), so that the multiplication cannot happen in parallel. Here is the 
>> error:
>> 
>> [0]PETSC ERROR: 
>> 
>> [0]PETSC E

[petsc-users] Multiplication of partitioned with non-partitioned (sparse) PETSc matrices

2023-08-23 Thread Thanasis Boutsikakis
Hi all,

I am trying to multiply two PETSc matrices as C = A * B, where A is a tall 
matrix and B is a relatively small matrix.

I have taken the decision to create A as a (row-)partitioned matrix and B as a 
non-partitioned matrix that is entirely shared by all procs (to avoid 
unnecessary communication).

Here is my code:

import numpy as np
from firedrake import COMM_WORLD
from firedrake.petsc import PETSc
from numpy.testing import assert_array_almost_equal

nproc = COMM_WORLD.size
rank = COMM_WORLD.rank

def create_petsc_matrix_non_partitioned(input_array):
    """Building a mpi non-partitioned petsc matrix from an array

    Args:
        input_array (np array): Input array
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        mpi mat: PETSc matrix
    """
    assert len(input_array.shape) == 2

    m, n = input_array.shape

    matrix = PETSc.Mat().createAIJ(size=((m, n), (m, n)), comm=COMM_WORLD)

    # Set the values of the matrix
    matrix.setValues(range(m), range(n), input_array[:, :], addv=False)

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix


def create_petsc_matrix(input_array, partition_like=None):
    """Create a PETSc matrix from an input_array

    Args:
        input_array (np array): Input array
        partition_like (petsc mat, optional): Petsc matrix. Defaults to None.
        sparse (bool, optional): Toggle for sparse or dense. Defaults to True.

    Returns:
        petsc mat: PETSc matrix
    """
    # Check if input_array is 1D and reshape if necessary
    assert len(input_array.shape) == 2, "Input array should be 2-dimensional"
    global_rows, global_cols = input_array.shape

    comm = COMM_WORLD
    if partition_like is not None:
        local_rows_start, local_rows_end = partition_like.getOwnershipRange()
        local_rows = local_rows_end - local_rows_start

        # No parallelization in the columns, set local_cols = None to parallelize
        size = ((local_rows, global_rows), (global_cols, global_cols))
    else:
        size = ((None, global_rows), (global_cols, global_cols))

    matrix = PETSc.Mat().createAIJ(size=size, comm=comm)
    matrix.setUp()

    local_rows_start, local_rows_end = matrix.getOwnershipRange()

    for counter, i in enumerate(range(local_rows_start, local_rows_end)):
        # Calculate the correct row in the array for the current process
        row_in_array = counter + local_rows_start
        matrix.setValues(
            i, range(global_cols), input_array[row_in_array, :], addv=False
        )

    # Assemble the matrix to compute the final structure
    matrix.assemblyBegin()
    matrix.assemblyEnd()

    return matrix


m, k = 10, 3
# Generate the random numpy matrices
np.random.seed(0)  # sets the seed to 0
A_np = np.random.randint(low=0, high=6, size=(m, k))
B_np = np.random.randint(low=0, high=6, size=(k, k))


A = create_petsc_matrix(A_np)

B = create_petsc_matrix_non_partitioned(B_np)

# Now perform the multiplication
C = A * B

The problem with this is that there is a mismatch between the local rows of A 
(which depend on the partitioning) and the global rows of B (3 for all procs), 
so that the multiplication cannot happen in parallel. Here is the error:

[0]PETSC ERROR: 

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably 
memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
https://petsc.org/release/faq/
[0]PETSC ERROR: configure using --with-debugging=yes, recompile, link, and run
[0]PETSC ERROR: to get more information on the crash.
[0]PETSC ERROR: Run with -malloc_debug to check if memory corruption is causing 
the crash.
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59
:
system msg for write_line failure : Bad file descriptor


Is there a standard way to achieve this?

Thanks,
Thanos