Hi all,
Let's say I have two DMDAs of identical size but with different numbers of dofs,
e.g. da1 with dof=4 and da2 with dof=1. I have a global vector associated with
each of them, say gv1 and gv2 respectively.
How can I copy/scatter the values of one particular field from gv1 to gv2?
Looking at the ...
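For later readers: assuming da1 and da2 have the same global size and parallel
decomposition, one option is VecStrideGather()/VecStrideScatter(), which copy a
single interlaced component between a blocked vector and a scalar one. A minimal
sketch (grid size, field index, and boundary/stencil settings are illustrative;
calls use current PETSc names; error checking omitted):

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM       da1, da2;
      Vec      gv1, gv2;
      PetscInt field = 2;   /* which of da1's 4 components to copy */

      PetscInitialize(&argc, &argv, NULL, NULL);
      DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                   DMDA_STENCIL_BOX, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                   4, 1, NULL, NULL, &da1);              /* dof = 4 */
      DMSetUp(da1);
      DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                   DMDA_STENCIL_BOX, 64, 64, PETSC_DECIDE, PETSC_DECIDE,
                   1, 1, NULL, NULL, &da2);              /* dof = 1 */
      DMSetUp(da2);
      DMCreateGlobalVector(da1, &gv1);
      DMCreateGlobalVector(da2, &gv2);

      /* copy one interlaced component of gv1 into gv2 ... */
      VecStrideGather(gv1, field, gv2, INSERT_VALUES);
      /* ... and the reverse direction */
      VecStrideScatter(gv2, field, gv1, INSERT_VALUES);

      VecDestroy(&gv1); VecDestroy(&gv2);
      DMDestroy(&da1);  DMDestroy(&da2);
      PetscFinalize();
      return 0;
    }

Both calls require the two vectors to have the same parallel layout up to the
block size, which two matching DMDAs give you.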
Hi,
I am working on a multi-level grid for the Poisson equation. I need to refine
some sub-regions of the computational domain. To do this, I plan to build some
boxes (patches) based on the coarsest level. I am using DM to manage the data.
I found there is a new function DMPatchCreate() in the ...
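As far as I know, DMPatch was still experimental at that time. A common
alternative for a (uniform, non-patch) multi-level grid is to refine a coarse
DMDA repeatedly with DMRefine(); this is a different technique than the patch
route the question asks about. A minimal sketch, with illustrative sizes and
refinement depth, error checking omitted:

    #include <petscdmda.h>

    int main(int argc, char **argv)
    {
      DM       da[4];      /* da[0] = coarsest ... da[3] = finest */
      PetscInt l, nlevels = 4;

      PetscInitialize(&argc, &argv, NULL, NULL);
      /* coarsest level: a 17x17 grid */
      DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
                   DMDA_STENCIL_STAR, 17, 17, PETSC_DECIDE, PETSC_DECIDE,
                   1, 1, NULL, NULL, &da[0]);
      DMSetUp(da[0]);
      /* each DMRefine() doubles the resolution: 17 -> 33 -> 65 -> 129 */
      for (l = 1; l < nlevels; l++)
        DMRefine(da[l-1], PETSC_COMM_WORLD, &da[l]);

      for (l = 0; l < nlevels; l++) DMDestroy(&da[l]);
      PetscFinalize();
      return 0;
    }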
Thanks Matt,
I tried Chombo for implementing AMR but have not tried SAMRAI yet. Chombo can
do AMR, but its data structure seems quite complicated to customize. What I
want to do with PETSc is to compose a simple home-made blocked multi-level
grid, though it is not ...
Hi guys,
I'm running into a bug that has made me question my understanding of the
memory layout in VecGhost.
First, I remember reading somewhere (in the manual or on the mailing list,
though I cannot find it now) that they are internally organized as all the
local values followed by all the ghost values.
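That layout is correct: the local form of a ghosted vector holds the owned
entries first, then the ghost entries in the order their global indices were
passed to VecCreateGhost(). A small self-contained check (assumes exactly 2
MPI ranks; the sizes and ghost indices are illustrative, error checking
omitted):

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec               gx, lx;
      PetscMPIInt       rank;
      PetscInt          i, n = 4, ghosts[2];
      const PetscScalar *a;

      PetscInitialize(&argc, &argv, NULL, NULL);
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
      /* each rank ghosts the first two entries owned by the other rank */
      ghosts[0] = rank ? 0 : 4;
      ghosts[1] = rank ? 1 : 5;
      VecCreateGhost(PETSC_COMM_WORLD, n, PETSC_DETERMINE, 2, ghosts, &gx);

      /* fill the global vector so that global entry i holds the value i */
      for (i = 4 * rank; i < 4 * rank + n; i++)
        VecSetValue(gx, i, (PetscScalar)i, INSERT_VALUES);
      VecAssemblyBegin(gx); VecAssemblyEnd(gx);
      VecGhostUpdateBegin(gx, INSERT_VALUES, SCATTER_FORWARD);
      VecGhostUpdateEnd(gx, INSERT_VALUES, SCATTER_FORWARD);

      /* local form: indices [0,n) are owned, [n,n+nghost) are ghosts */
      VecGhostGetLocalForm(gx, &lx);
      VecGetArrayRead(lx, &a);
      for (i = 0; i < n + 2; i++)
        PetscSynchronizedPrintf(PETSC_COMM_WORLD, "[%d] local[%d] = %g\n",
                                rank, (int)i, (double)PetscRealPart(a[i]));
      PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);
      VecRestoreArrayRead(lx, &a);
      VecGhostRestoreLocalForm(gx, &lx);

      VecDestroy(&gx);
      PetscFinalize();
      return 0;
    }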
How big an application are you looking at? If you are thinking in the range of
a few tens of millions of grid points on a couple of hundred processors, then
I'd say the simplest approach is to create the grid in serial and then use
PETSc's interface to ParMETIS to handle the partitioning. I did this with
my ...
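For reference, a minimal sketch of that ParMETIS route through PETSc's
MatPartitioning interface. The function name PartitionMesh and the adjacency
matrix adj (a MATMPIADJ dual graph of the mesh, built elsewhere) are my
assumptions, not anything from this thread; error checking omitted:

    #include <petscmat.h>

    PetscErrorCode PartitionMesh(Mat adj, IS *partitioning)
    {
      MatPartitioning part;

      PetscFunctionBeginUser;
      MatPartitioningCreate(PETSC_COMM_WORLD, &part);
      MatPartitioningSetAdjacency(part, adj);
      MatPartitioningSetType(part, MATPARTITIONINGPARMETIS);
      MatPartitioningSetFromOptions(part);      /* allows -mat_partitioning_type ... */
      /* result: an IS giving the new owning rank of each local vertex */
      MatPartitioningApply(part, partitioning);
      MatPartitioningDestroy(&part);
      PetscFunctionReturn(0);
    }

The resulting IS can then be used to migrate the serially created grid data to
its new owners.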
Awesome. Thanks Barry for the quick response.
Dear Dr. Smith,
I sincerely appreciate your valuable answers. My KSP Poisson solver has been
significantly sped up with your help. Now I wonder what extra steps I should
take to employ geometric MG for a non-uniform Cartesian mesh.
I suppose the DMDA won't automatically generate the coarse grid ...
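A sketch of the standard route, following the DMDA/KSP tutorials (e.g. ksp
ex29): attach the finest DMDA to the KSP and let -pc_type mg generate the
coarser levels by coarsening it. ComputeMatrix and ComputeRHS are hypothetical
user callbacks; for a non-uniform mesh they should read the spacing from each
level's coordinates rather than assume a constant h. Error checking omitted:

    #include <petscksp.h>

    extern PetscErrorCode ComputeMatrix(KSP, Mat, Mat, void *);
    extern PetscErrorCode ComputeRHS(KSP, Vec, void *);

    PetscErrorCode SolveWithGMG(DM da, void *user)
    {
      KSP ksp;

      PetscFunctionBeginUser;
      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetDM(ksp, da);     /* finest level; coarser DMDAs come from DMCoarsen() */
      KSPSetComputeOperators(ksp, ComputeMatrix, user);
      KSPSetComputeRHS(ksp, ComputeRHS, user);
      KSPSetFromOptions(ksp);  /* run with -pc_type mg -pc_mg_levels 4 */
      KSPSolve(ksp, NULL, NULL);
      KSPDestroy(&ksp);
      PetscFunctionReturn(0);
    }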