Amazing: right preconditioning fixes the problem. Thanks a lot!
On Tue, Oct 16, 2018 at 8:31 PM Dave May wrote:
>
>
> On Wed, 17 Oct 2018 at 03:15, Weizhuo Wang wrote:
>
>> I just tried both; neither of them makes a difference. I got exactly the
>> same curve with either combination.
>>
>
>
Use -ksp_view to confirm the options are actually set.
Fande
Sent from my iPhone
> On Oct 16, 2018, at 7:40 PM, Ellen M. Price
> wrote:
>
> Maybe a stupid suggestion, but sometimes I forget to call the
> *SetFromOptions function on my object, and then get confused when
> changing the options has no effect. Just a thought from a fellow grad
> student.
Maybe a stupid suggestion, but sometimes I forget to call the
*SetFromOptions function on my object, and then get confused when
changing the options has no effect. Just a thought from a fellow grad
student.
Ellen
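Ellen's point can be sketched as follows; a minimal PETSc-style fragment, assuming a KSP object and an already-assembled matrix A (the function name and arguments are illustrative, not from the attached code):

```c
#include <petscksp.h>

/* Sketch only: without the KSPSetFromOptions call below, command-line
   options such as -ksp_type, -pc_type, and -ksp_rtol are silently ignored. */
PetscErrorCode solve_sketch(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* the easy-to-forget call */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}
```

Running with -ksp_view (as Fande suggests) then prints the solver configuration actually in effect, which confirms whether the options took hold.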
On 10/16/2018 09:36 PM, Matthew Knepley wrote:
> On Tue, Oct 16, 2018 at 9:14 PM Weizhuo Wang wrote:
On Tue, Oct 16, 2018 at 9:14 PM Weizhuo Wang wrote:
> I just tried both; neither of them makes a difference. I got exactly the
> same curve with either combination.
>
I have a hard time believing you. If you make the residual tolerance much
finer, your error will definitely change.
I run tests
On Wed, 17 Oct 2018 at 03:15, Weizhuo Wang wrote:
> I just tried both; neither of them makes a difference. I got exactly the
> same curve with either combination.
>
Try using right preconditioning.
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetPCSide.html
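For reference, right preconditioning can be selected either in code or from the command line; a hedged sketch, assuming a KSP named ksp (the function and option names come from the KSPSetPCSide manual page linked above):

```c
#include <petscksp.h>

/* Equivalent to running with: -ksp_pc_side right
   With right preconditioning, GMRES monitors the true (unpreconditioned)
   residual, so convergence is judged on the actual system rather than the
   preconditioned one. */
PetscErrorCode use_right_pc(KSP ksp)
{
  PetscErrorCode ierr;
  ierr = KSPSetPCSide(ksp,PC_RIGHT);CHKERRQ(ierr);
  return 0;
}
```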
I just tried both; neither of them makes a difference. I got exactly the
same curve with either combination.
Thanks!
Wang weizhuo
On Tue, Oct 16, 2018 at 8:06 PM Matthew Knepley wrote:
> On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang
> wrote:
>
>> Hello again!
>>
>> After some tweaking, the code is giving right answers now.
On Tue, Oct 16, 2018 at 7:26 PM Weizhuo Wang wrote:
> Hello again!
>
> After some tweaking, the code is giving right answers now. However it starts
> to disagree with MATLAB results (the 'traditional' way using matrix inverse)
> when the grid is larger than 100*100. My PhD advisor and I suspect that
Hello again!
After some tweaking, the code is giving right answers now. However it starts
to disagree with MATLAB results (the 'traditional' way using matrix inverse)
when the grid is larger than 100*100. My PhD advisor and I suspect that
the default dimension of the Krylov subspace is 100 in the test
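If the suspicion is that the Krylov space is too small, both the GMRES restart length and the solver tolerance can be changed; a sketch, assuming a KSP named ksp (PETSc's default GMRES restart is 30, and the values below are arbitrary examples, not recommendations):

```c
#include <petscksp.h>

/* Equivalent to running with: -ksp_gmres_restart 200 -ksp_rtol 1e-12 */
PetscErrorCode tighten_solver(KSP ksp)
{
  PetscErrorCode ierr;
  ierr = KSPGMRESSetRestart(ksp,200);CHKERRQ(ierr);
  ierr = KSPSetTolerances(ksp,1e-12,PETSC_DEFAULT,PETSC_DEFAULT,
                          PETSC_DEFAULT);CHKERRQ(ierr);
  return 0;
}
```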
To reiterate what Matt is saying, you seem to have the exact solution on a
10x10 grid. That makes no sense unless the solution can be represented
exactly by your FE space (e.g., u(x,y) = x + y).
On Mon, Oct 8, 2018 at 9:33 PM Matthew Knepley wrote:
> On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang wrote:
On Mon, Oct 8, 2018 at 9:28 PM Weizhuo Wang wrote:
> The code is attached in case anyone wants to take a look, I will try the
> high frequency scenario later.
>
That is not the error. It is superconvergence at the vertices. The real
solution is trigonometric, so your
linear interpolants or
The code is attached in case anyone wants to take a look, I will try the
high frequency scenario later.
On Mon, Oct 8, 2018 at 7:58 PM Mark Adams wrote:
>
>
> On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang wrote:
>
>> The first plot is the norm with the flag -pc_type lu with respect to
>> number of grids in one axis (n), and the second plot is the norm without
>> the flag -pc_type lu.
On Mon, Oct 8, 2018 at 6:58 PM Weizhuo Wang wrote:
> The first plot is the norm with the flag -pc_type lu with respect to
> number of grids in one axis (n), and the second plot is the norm without
> the flag -pc_type lu.
>
So you are using the default PC w/o LU. The default is ILU. This will
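As a quick way to compare the two setups and see what is actually being used, something like the following (the executable name is an assumption based on the thread; the options are standard PETSc flags):

```shell
# Direct solve via LU factorization; -ksp_view prints the solver actually used.
./ex12 -pc_type lu -ksp_view

# Default iterative setup (ILU preconditioner in serial) with a tighter tolerance.
./ex12 -ksp_rtol 1e-12 -ksp_view
```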
On Tue, Oct 2, 2018 at 5:26 PM Weizhuo Wang wrote:
> I didn't specify a tolerance; it was using the default tolerance. Doesn't
> the asymptoting norm imply a finer grid won't help to get a finer solution?
>
There are two things going on in your test, discretization error controlled
by the grid,
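The standard splitting behind this remark can be written out; a sketch, with $u$ the exact PDE solution, $u_h$ the exact discrete solution on a grid of spacing $h$, and $u_h^k$ the iterate the Krylov solver actually returns:

```latex
\|u - u_h^k\| \;\le\;
\underbrace{\|u - u_h\|}_{\text{discretization error } \approx\, C h^p}
\;+\;
\underbrace{\|u_h - u_h^k\|}_{\text{algebraic error, set by the KSP tolerance}}
```

Refining the grid shrinks only the first term; once it falls below the algebraic error, further refinement cannot reduce the total error, which is why the norm appears to asymptote.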
I didn't specify a tolerance; it was using the default tolerance. Doesn't
the asymptoting norm imply a finer grid won't help to get a finer solution?
On Tue, Oct 2, 2018 at 4:11 PM, Mark Adams wrote:
>
>
> On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang wrote:
>
>> Yes, I was using the 1-norm in my Helmholtz code; the example code used
>> the 2-norm. But now I am using the 2-norm in both codes.
On Tue, Oct 2, 2018 at 5:04 PM Weizhuo Wang wrote:
> Yes, I was using the 1-norm in my Helmholtz code; the example code used
> the 2-norm. But now I am using the 2-norm in both codes.
>
> /*
> Check the error
> */
> ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
> ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
Yes, I was using the 1-norm in my Helmholtz code; the example code used
the 2-norm. But now I am using the 2-norm in both codes.
/*
Check the error
*/
ierr = VecAXPY(x,-1.0,u); CHKERRQ(ierr);
ierr = VecNorm(x,NORM_1,&norm); CHKERRQ(ierr);
ierr = KSPGetIterationNumber(ksp,&its); CHKERRQ(ierr);
ierr =
On Tue, Oct 2, 2018 at 2:24 PM Weizhuo Wang wrote:
> The example code and makefile are attached below. The whole thing started
> as I tried to build a Helmholtz solver, and the mean error (calculated by:
> sum( | numerical_sol - analytical_sol | / analytical_sol ) )
>
This is a one norm. If you
The example code and makefile are attached below. The whole thing started
as I tried to build a Helmholtz solver, and the mean error (calculated by:
sum( | numerical_sol - analytical_sol | / analytical_sol ) ) increases as I
use finer and finer grids. Then I looked at the example 12 (Laplacian
As Matt said, you should see the initial 2-norm residual asymptote to a
constant with scaling, but it will rise.
I prefer the max norm for this reason. You can use -ksp_monitor_max, though
apparently it computes an extra residual, and I don't understand why
it should ...
If this does not
Please send your version of the example that computes the mean norm of the
grid; I suspect we are talking apples and oranges.
Barry
> On Oct 1, 2018, at 7:51 PM, Weizhuo Wang wrote:
>
> I also tried to divide the norm by m*n, which is the number of grid points;
> the trend of the norm still increases.
On Mon, Oct 1, 2018 at 8:51 PM Weizhuo Wang wrote:
> I also tried to divide the norm by m*n, which is the number of grid points;
> the trend of the norm still increases.
>
We need to be precise. First, look at the initial residual, because that is
what you control with the initial
guess. You are saying
I also tried to divide the norm by m*n, which is the number of grid points;
the trend of the norm still increases.
Thanks!
Weizhuo
On Mon, Oct 1, 2018 at 7:45 PM, Matthew Knepley wrote:
> On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang wrote:
>
>> Hi!
>>
>> I'm recently trying out the example code provided with the KSP solver
>> (ex12.c).
On Mon, Oct 1, 2018 at 6:31 PM Weizhuo Wang wrote:
> Hi!
>
> I'm recently trying out the example code provided with the KSP solver
> (ex12.c). I noticed that the mean norm of the grid increases as I use finer
> meshes. For example, the mean norm is 5.72e-8 at m=10, n=10. However at
> m=100,