> But that's a different error. Have you tried to look up what error code 60
> means in SLEPc?
Here is the entire error. I was interpreting the error as the “Nonconforming
object size”, which is the error I was reporting. I will research error code
60 in SLEPc, thank you!
[0]PETSC ERROR:
On 1/8/21 3:58 PM, Zachary 42! wrote:
Yes, the error comes when I try to use the SLEPc solver (the matrix builds
without errors). Running the attached file (sorry, I've been experimenting
trying to get things working, so I think I should include what I am testing
now), at the bottom of the
No, that's not what you want. You need to partition the columns in the same
way as you partition the vectors you want to multiply the matrix by. We
generally partition the vectors in the same way as the rows of the matrix,
and so you then also have to choose the same partitioning for the columns.
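A minimal sketch (mine, not from the thread) of the layout described above:
each rank owns a contiguous block of rows, and the columns are partitioned
with the very same IndexSet, so that A*x conforms when x is partitioned like
the rows. All names and sizes are illustrative.

#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>

#include <algorithm>
#include <mpi.h>

// Give rank r a contiguous block of the n global indices; the first
// n % n_procs ranks take one extra entry so the local sizes sum to n.
dealii::IndexSet make_partition(const unsigned int n, const MPI_Comm comm)
{
  const unsigned int n_procs = dealii::Utilities::MPI::n_mpi_processes(comm);
  const unsigned int rank    = dealii::Utilities::MPI::this_mpi_process(comm);
  const unsigned int chunk   = n / n_procs;
  const unsigned int rem     = n % n_procs;
  const unsigned int begin   = rank * chunk + std::min(rank, rem);
  const unsigned int end     = begin + chunk + (rank < rem ? 1u : 0u);

  dealii::IndexSet owned(n);
  owned.add_range(begin, end); // half-open interval [begin, end)
  return owned;                // use the same set for rows and columns
}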
You said “the local number of columns also needs to add up to the global
number of columns” but my desired layout is local_num_columns ==
global_num_columns.
Hi W.,
Apologies for not asking a specific question. Your help with the SparseMatrix
constructor still left me confused and I was hoping for clarity. (To answer
your question, the code ran without error, but the error came back once I
tried to use SLEPc.)
You said “the local number of
On 1/6/21 8:07 AM, Zachary 42! wrote:
Here is my code again with hopefully better comments. Looking at the loop
structure for the dynamic_sparsity_pattern should make it clear. I think the
3rd argument in the constructor for the dynamic sparsity pattern is wrong if I
want these dimensions.
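For reference, a sketch (mine, not the poster's code) of how that third
argument is meant to be used: in the three-argument DynamicSparsityPattern
constructor, the first two arguments are the global dimensions and the third
is an IndexSet naming the rows stored on this process; it is not a local
column count. Sizes below are illustrative.

#include <deal.II/base/index_set.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>

// Columns always run over the full global range; the IndexSet only
// restricts which rows this process stores entries for.
void make_pattern(const unsigned int n_global_rows,
                  const unsigned int n_global_cols,
                  const dealii::IndexSet &locally_owned_rows)
{
  dealii::DynamicSparsityPattern dsp(n_global_rows,
                                     n_global_cols,
                                     locally_owned_rows);

  // Entries may only be added in locally stored rows.
  for (const auto row : locally_owned_rows)
    dsp.add(row, row); // e.g. the diagonal; the real stencil goes here
}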
Sorry, I am not sure why my code’s format changed when I copied and pasted,
trying again...
#include <...> // five include directives; the header names were stripped by the archive's HTML rendering
using namespace dealii;
class OneBodyHamiltonianOperator
{
public:
/**
* Declare type for container size.
*/
using size_type = dealii::types::global_dof_index;
W.,
Ah yes, you are correct in your thinking. In my test case every process did
have the same number of rows, but I would need to communicate the counts with
a gather to account for a number of rows not divisible by the number of
processors. I will use the 4th reinit function to avoid
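A sketch (assumption: each rank initially knows only its own row count) of
the gather mentioned above, so that a reinit overload taking per-process
sizes can be called consistently on every rank:

#include <mpi.h>
#include <vector>

std::vector<unsigned int> gather_local_rows(const unsigned int my_rows,
                                            const MPI_Comm comm)
{
  int n_procs = 0;
  MPI_Comm_size(comm, &n_procs);

  // Every rank needs the full list, so an allgather is the natural fit.
  std::vector<unsigned int> rows_per_process(n_procs);
  MPI_Allgather(&my_rows, 1, MPI_UNSIGNED,
                rows_per_process.data(), 1, MPI_UNSIGNED, comm);
  return rows_per_process;
}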
On 1/5/21 11:32 AM, Zachary Streeter wrote:
Yes, I want to use the constructor with the dynamic sparsity pattern. So with
your suggestion in mind, would that just be the following:
dealii::IndexSet local_owned(a_local_row_set.size());
local_owned.add_range(*a_local_row_set.begin(),
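The call above is cut off in the archive. A hypothetical completion (my
guess, not from the thread, assuming a_local_row_set is contiguous;
add_range takes a half-open interval [begin, end)):

dealii::IndexSet local_owned(a_local_row_set.size());
local_owned.add_range(*a_local_row_set.begin(),
                      *a_local_row_set.begin() +
                        a_local_row_set.n_elements());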
W.,
Ah okay I see, I will try that in my program and let you know.
Yes, I want to use the constructor with the dynamic sparsity pattern. So
with your suggestion in mind, would that just be the following:
dealii::IndexSet local_owned(a_local_row_set.size());
On 1/5/21 8:21 AM, Zachary Streeter wrote:
Let me know if this is okay. This compiled, ran, and produced the same error
on my end.
Yes, that's the sort of testcase that makes it easy to debug :-)
In this call,
m_H1.reinit(MPI_COMM_WORLD,
a_local_row_set.size(),
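The snippet is likewise cut off. For comparison, a sketch of the
IndexSet-based reinit overload that takes a sparsity pattern, assuming a
square matrix whose columns are partitioned like its rows (the variable
names come from the snippets in this thread; the call shape is my
assumption, not the poster's code):

dealii::PETScWrappers::MPI::SparseMatrix m_H1;
m_H1.reinit(a_local_row_set,  // locally owned rows
            a_local_row_set,  // locally owned columns: the same partition
            dynamic_sparsity_pattern,
            MPI_COMM_WORLD);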
#include <...> // five include directives; the header names were stripped by the archive's HTML rendering
using namespace dealii;
class OneBodyHamiltonianOperator
{
public:
/**
* Declare type for container size.
*/
using size_type = dealii::types::global_dof_index;
OneBodyHamiltonianOperator(const dealii::IndexSet _local_row_set,
My project is in quantum scattering and I would like to have some operators
be distributed PETSc objects. So inside my OneBodyHamiltonianOperator class
(for example), I would like to create a PETScWrappers::MPI::SparseMatrix and
then use SLEPc to solve for the ground state and excited states.
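A minimal sketch of the eigensolve step, assuming the matrix m_H1 is already
assembled and using deal.II's SLEPcWrappers as in the library's eigenvalue
examples (the solver choice and tolerances here are illustrative):

#include <deal.II/base/index_set.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/slepc_solver.h>
#include <deal.II/lac/solver_control.h>

#include <vector>

// Solve H psi = E psi for the lowest few eigenpairs.
void solve_ground_state(dealii::PETScWrappers::MPI::SparseMatrix &m_H1,
                        const dealii::IndexSet &locally_owned_rows)
{
  const unsigned int n_eigenpairs = 4; // ground state plus a few excited states

  std::vector<PetscScalar> eigenvalues(n_eigenpairs);
  std::vector<dealii::PETScWrappers::MPI::Vector> eigenvectors(
    n_eigenpairs,
    dealii::PETScWrappers::MPI::Vector(locally_owned_rows, MPI_COMM_WORLD));

  dealii::SolverControl control(1000, 1e-9);
  dealii::SLEPcWrappers::SolverKrylovSchur solver(control, MPI_COMM_WORLD);
  solver.set_which_eigenpairs(EPS_SMALLEST_REAL); // lowest energies first
  solver.solve(m_H1, eigenvalues, eigenvectors, n_eigenpairs);
}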
Hi everyone,
I am trying to debug some strange behavior. I am building a PETSc sparse
parallel matrix using 4 processors, which gives me 32 local rows per process
(so 128 global rows). But when I pass the local_num_of_rows variable into
the reinit function, this is the PETSc