Hi
I inserted these lines into hypre.c to see what block size it is set to:
/* special case for BoomerAMG */
if (jac->setup == HYPRE_BoomerAMGSetup) {
  PetscInt bs;
  ierr = MatGetBlockSize(pc->pmat,&bs);CHKERRQ(ierr);
  /* check block size passed to HYPRE */
  ierr = PetscPrintf(PetscObjectComm((PetscObject)pc),"the block size is %D\n",bs);CHKERRQ(ierr);
}
Clear enough. Thank you :-)
Giang
OK, let's come back to my problem. I got your point about the interaction
between components in one block. In my case, the interaction is strong.
As you said, I tried this:
PetscInt nsplits;
KSP *sub_ksp;
ierr = KSPSetFromOptions(ksp); CHKERRQ(ierr);
ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &sub_ksp); CHKERRQ(ierr);
> I used it for PyLith and saw this. I did not think any AMG had scalable
> setup time.
>
>
OK, I am guessing it was scaling poorly in a weak-scaling sense, but
sublinear after some saturation at the beginning.
I have not done a weak scaling study on matrix setup (RAP primarily) ever,
but I did
On Fri, Jan 22, 2016 at 12:17 PM, Hom Nath Gharti
wrote:
> Thanks, Matt, for the great suggestion. One last question: do you know
> whether the GPU capability of the current PETSc version is mature enough
> to try on my problem?
>
The only thing that would really make sense to do
Hi Matt,
SPECFEM currently has only an explicit time scheme and does not have
full gravity implemented. I am adding an implicit time scheme and full
gravity so that it can be used for interesting quasistatic problems
such as glacial rebound, post-seismic relaxation, etc. I am using PETSc
as a linear
On Fri, Jan 22, 2016 at 11:47 AM, Hom Nath Gharti
wrote:
> Thanks a lot.
>
> With AMG it did not converge within the iteration limit of 3000.
>
> In solid: elastic wave equation with added gravity term \rho \nabla\phi
> In fluid: acoustic wave equation with added gravity
Hi Matt
I would rather like to set the block size for block P2 too. Why?
Because in one of my tests (for a problem involving only [u_x u_y u_z]),
GMRES + Hypre AMG converges in 50 steps with block size 3, whereas it
increases to 140 steps if the block size is 1 (see attached files).
This gives me the
Thanks for your suggestions! If it's just 2X, I will not waste my time!
Hom Nath
On Fri, Jan 22, 2016 at 11:10 AM, Hom Nath Gharti
wrote:
> Thanks Matt.
>
> Attached detailed info on ksp of a much smaller test. This is a
> multiphysics problem.
>
You are using FGMRES/ASM(ILU0). From your description below, this sounds
like
an elliptic system. I would at
Do you mean the option pc_fieldsplit_block_size? In this thread:
http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error
It assumes you have a constant number of fields at each grid point, am I
right? However, my field split is not constant, like
[u1_x u1_y u1_z p_1 u2_x
On Fri, Jan 22, 2016 at 7:27 AM, Hoang Giang Bui wrote:
> DO you mean the option pc_fieldsplit_block_size? In this thread:
>
> http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error
>
No. "Block Size" is confusing in PETSc since it is used to do several
things.
Dear all,
I take this opportunity to ask for your valuable suggestions.
I am solving an elastic-acoustic-gravity equation on the planet. I
have the displacement vector (ux,uy,uz) in the solid region, displacement
potential (\xi) and pressure (p) in the fluid region, and gravitational
potential (\phi) in all
> I said the Hypre setup cost is not scalable,
>
I'd be a little careful here. Scaling for the matrix triple product is
hard and hypre does put effort into scaling. I don't have any data
however. Do you?
> but it can be amortized over the iterations. You can quantify this
> just by
Why is P2/P2 not a co-located discretization? However, that's not my
question. The P2/P1 which I used generates a variable block size at each
node. That was fine when I used PCFieldSplitSetIS for each component,
displacements and pressures. But how do I set the block size (3) for the
displacement block?
Hoang Giang Bui writes:
> Why P2/P2 is not for co-located discretization?
Matt typed "P2/P2" when he meant "P2/P1".
This is a very interesting thread, because using a block matrix improves
the performance of AMG a lot. In my case it is the elasticity problem.
One more question I would like to ask, which is more about the performance
of the solver: if I have a coupled problem, say the point block is [u_x u_y
u_z p] in
Hoang Giang Bui writes:
> One more question I would like to ask, which is more about the performance
> of the solver: if I have a coupled problem, say the point block is [u_x u_y
> u_z p] in which the entries of the p block in the stiffness matrix are on a
> much smaller scale than u (p~1e-6,
Okay that makes sense, thanks
Thanks Barry,
1) So for block matrices, the ja array is smaller. But what's the
"hardware" explanation for this performance improvement? Does it have to do
with spatial locality, where you are more likely to reuse data in that ja
array, or does it have to do with the fact that loading/storing
Hi all,
1) I am guessing MATMPIBAIJ could theoretically have better performance
than simply using MATMPIAIJ. Why is that? Is it similar to the reasoning
that block (dense) matrix-vector multiply is "faster" than simple
matrix-vector?
2) I am looking through the manual and online documentation