# Re: [petsc-users] Transform scipy sparse to partioned, parallel petsc matrix in PETSc4py

On 11 February 2018 at 20:35, Jan Grießer <griesser....@googlemail.com> wrote:
> Hey,
>
> I have a precomputed scipy sparse matrix of size 35000x35000 for which I
> want to solve the eigenvalue problem. I don't really get how to
> parallelize this problem correctly.
> Similar to another, I tried the following code:
>
> B = D.tocsr()
>
> # Construct the matrix Ds in parallel
> Ds = PETSc.Mat().create()
> Ds.setSizes(CSRmatrix.shape)
> Ds.assemble()

Use Ds.setUp().

>
> # Fill the matrix
> rstart, rend = Ds.getOwnershipRange()
> csr = (
>     B.indptr[rstart:rend+1] - B.indptr[rstart],
>     B.indices[B.indptr[rstart]:B.indptr[rend]],
>     B.data[B.indptr[rstart]:B.indptr[rend]]
> )
>

This looks just fine
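For anyone following along, the slicing above can be checked with scipy alone, without PETSc. This is a sketch: `B`, `rstart`, and `rend` mirror the names in the post, but here `B` is a small random matrix and the ownership range is picked by hand.

```python
# scipy-only sketch of the CSR slicing above: extract the triplet for a
# block of rows [rstart, rend) and check it matches that row slice of B.
import numpy as np
import scipy.sparse as sp

B = sp.random(8, 8, density=0.4, format='csr', random_state=0)
rstart, rend = 2, 6  # pretend this is the ownership range of one process

# Same slicing as in the post: shift indptr so the local block starts at 0.
indptr = B.indptr[rstart:rend+1] - B.indptr[rstart]
indices = B.indices[B.indptr[rstart]:B.indptr[rend]]
data = B.data[B.indptr[rstart]:B.indptr[rend]]

local = sp.csr_matrix((data, indices, indptr),
                      shape=(rend - rstart, B.shape[1]))
assert (local.toarray() == B[rstart:rend].toarray()).all()
```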

>
> Ds = PETSc.Mat().createAIJ(size=CSRmatrix.shape, csr=csr)
> Ds.assemble()
>

I think you don't need to assemble here.

>
> # Solve the eigenvalue problem
> solve_eigensystem(Ds)
>
> This code works for 1 processor with mpiexec -n 1 python example.py; however,
> for an increasing number of processors it appears as if all processors try to
> solve the overall problem instead of splitting it into blocks and solving for
> a subset of eigenvalues and eigenvectors.
> Why is this the case, or did I miss something?
>

I guess you are using `mpiexec` from a different MPI implementation
than the one you used to build PETSc and petsc4py.
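One quick way to check for that mismatch (a diagnostic sketch, assuming `mpiexec` and petsc4py are on your PATH) is to launch a trivial petsc4py script and see whether the ranks are distinct:

```shell
# Which mpiexec is on PATH? It should come from the same MPI that
# PETSc/petsc4py were built against.
which mpiexec

# Each rank should print a distinct number; if every line says
# "0 of 1", mpiexec belongs to a different MPI implementation.
mpiexec -n 2 python -c \
    "from petsc4py import PETSc; c = PETSc.COMM_WORLD; \
     print(c.getRank(), 'of', c.getSize())"
```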

--
Lisandro Dalcin
============
Research Scientist
Computer, Electrical and Mathematical Sciences & Engineering (CEMSE)
Extreme Computing Research Center (ECRC)
King Abdullah University of Science and Technology (KAUST)
http://ecrc.kaust.edu.sa/

4700 King Abdullah University of Science and Technology
al-Khawarizmi Bldg (Bldg 1), Office # 0109
Thuwal 23955-6900, Kingdom of Saudi Arabia
http://www.kaust.edu.sa

Office Phone: +966 12 808-0459