Hi,

I'm now comparing at every time step. The A matrix and rhs of the NSPCG and PETSc systems are exactly the same; the norm of the difference is zero. The solutions given by both solvers are very similar from, e.g., time steps 1-315, and vary slowly since it is an oscillating-body problem. Then, strangely, at time=316, PETSc's answer suddenly differs by a large amount, e.g. p jumps from 0.3 to 50. I should also add that the flowfield at that time step still "looks ok". However, the large pressure change from the pressure Poisson eqn makes the computed lift/drag coefficients erroneous. Moreover, after that, the pressure slowly "returns" to the same answer as the NSPCG solver within a few time steps ... and then after a while the sudden jump happens again. In the end, I get a cl/cd vs time graph with lots of spikes.
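Since the two matrices and rhs agree exactly, my next step is to check which answer actually satisfies the system at the bad time step, i.e. compute the residual ||A*x - b|| for both solutions. A rough, untested sketch of what I mean (A_mat is my PETSc matrix as in matrix_compare below; xx and b_rhs are just placeholder names for a Vec holding either solver's solution and the rhs):

subroutine check_residual(A_mat,xx,b_rhs)

!sketch only: report ||A*xx - b_rhs||_2 for a candidate solution xx
!A_mat is the assembled PETSc matrix; xx and b_rhs are placeholder Vec names

Mat A_mat
Vec xx,b_rhs,r_test
PetscScalar neg_one
PetscReal rnorm
integer :: ierr

neg_one=-1.

call VecDuplicate(b_rhs,r_test,ierr)

call MatMult(A_mat,xx,r_test,ierr)        !r = A*x
call VecAXPY(r_test,neg_one,b_rhs,ierr)   !r = A*x - b
call VecNorm(r_test,NORM_2,rnorm,ierr)

print *, 'residual norm = ', rnorm

call VecDestroy(r_test,ierr)

end subroutine check_residual

If the NSPCG solution gives a small residual with the PETSc matrix at step 316 while the PETSc solution does not (or vice versa), that should tell me which solve is actually going wrong.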
I hope this can be resolved, because the NSPCG solver is much slower and not as stable. (I have also appended, below the quoted mails, a sketch of the row-locating check I mentioned earlier, in case it is useful.)

Thanks

Barry Smith wrote:
> Ben,
>
>   Are you comparing the two matrices at timestep 316? Also compare the
> two right hand sides at 316. Are they similar, totally different??
>
>   Barry
>
>
> On Sat, 4 Aug 2007, Ben Tay wrote:
>
>> Hi,
>>
>> After using MatAXPY, I used MatGetRowMaxAbs, VecAbs and VecMax to get the row
>> with the max value, take its absolute value and print out the
>> location/value. I managed to find some minor mistakes and corrected them so
>> that the 2 matrices are identical. Unfortunately, the same thing still happens!
>>
>> I've checked for convergence and the iteration count is 1 and 8 for direct
>> solving and HYPRE's AMG respectively. So there's no convergence problem. I've
>> also tried changing to the same pc/ksp as that of NSPCG, which is Jacobi
>> preconditioning + KSPBCGS, but the same thing happens.
>>
>> Any idea what's happening?
>>
>> Thanks
>>
>> Barry Smith wrote:
>>
>>>   Unless the matrix values are huge then this is a big difference. All you
>>> can do is print the difference matrix with ASCII MatView and
>>> look for large entries. This will tell you the locations of the trouble
>>> entries.
>>>
>>>   Barry
>>>
>>>
>>> On Sat, 4 Aug 2007, Ben Tay wrote:
>>>
>>>> Hi,
>>>>
>>>> I realised that the matrix storage format of NSPCG is actually the ITPACK
>>>> storage format. Anyway, I tried to insert the values into a PETSc matrix,
>>>> use MatAXPY with the scalar a = -1 and then use MatNorm on the output matrix.
>>>>
>>>> Using NORM_1, NORM_FROBENIUS and NORM_INFINITY, I got ~543, 3194 and 556
>>>> respectively. Does this mean that the matrices are different? How "much"
>>>> different does that mean?
>>>>
>>>> So what is the next step? Is there any way to find out the location of the
>>>> values which are different?
>>>>
>>>> This is my comparison subroutine:
>>>>
>>>> subroutine matrix_compare
>>>>
>>>> !compare big_A & PETSc matrix
>>>>
>>>> integer :: k,kk,II,JJ,ierr
>>>>
>>>> Vec xx_test,b_rhs_test
>>>> Mat A_mat_test
>>>>
>>>> PetscScalar aa
>>>>
>>>> PetscReal nrm
>>>>
>>>> aa=-1.
>>>>
>>>> call MatCreateSeqAIJ(PETSC_COMM_SELF,total_k,total_k,10,PETSC_NULL_INTEGER,A_mat_test,ierr)
>>>>
>>>> call VecCreateSeq(PETSC_COMM_SELF,total_k,b_rhs_test,ierr)
>>>>
>>>> call VecDuplicate(b_rhs_test,xx_test,ierr)
>>>>
>>>> call VecAssemblyBegin(b_rhs_test,ierr)
>>>> call VecAssemblyEnd(b_rhs_test,ierr)
>>>>
>>>> call VecAssemblyBegin(xx_test,ierr)
>>>> call VecAssemblyEnd(xx_test,ierr)
>>>>
>>>> do k=1,total_k
>>>>    do kk=1,10
>>>>
>>>>       II=k-1
>>>>       JJ=int_A(k,kk)-1
>>>>       call MatSetValues(A_mat_test,1,II,1,JJ,big_A(k,kk),INSERT_VALUES,ierr)
>>>>       call VecSetValue(b_rhs_test,II,q_p(k),INSERT_VALUES,ierr)
>>>>    end do
>>>>
>>>> end do
>>>>
>>>> ! call MatCopy(A_mat,A_mat_test,DIFFERENT_NONZERO_PATTERN,ierr)
>>>>
>>>> call MatAssemblyBegin(A_mat_test,MAT_FINAL_ASSEMBLY,ierr)
>>>> call MatAssemblyEnd(A_mat_test,MAT_FINAL_ASSEMBLY,ierr)
>>>>
>>>> call MatAXPY(A_mat_test,aa,A_mat,SAME_NONZERO_PATTERN,ierr)
>>>>
>>>> call MatNorm(A_mat_test,NORM_INFINITY,nrm,ierr)
>>>>
>>>> print *, nrm
>>>>
>>>> end subroutine matrix_compare
>>>>
>>>> Thanks!
>>>>
>>>> Barry Smith wrote:
>>>>
>>>>>   You can use MatCreateSeqAIJWithArrays() to "convert" the NSPCG matrix into
>>>>> PETSc format and then MatAXPY() to difference them and then MatNorm() to see
>>>>> how large the result is. Make sure the PETSc or hypre solver is always
>>>>> converging. Run with -ksp_converged_reason and/or -ksp_monitor. My guess is
>>>>> that the matrix is becoming very ill-conditioned so the solvers with the
>>>>> default options are failing.
>>>>>
>>>>>   Barry
>>>>>
>>>>>
>>>>> On Fri, 3 Aug 2007, Ben Tay wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I used 2 packages to solve my Poisson eqn, which is part of my unsteady NS
>>>>>> solver. One is the NSPCG solver package, which uses the compressed row
>>>>>> format to store the A matrix. The other is PETSc. I found that both solvers
>>>>>> gave me similar answers for a number of time steps. However, after that, the
>>>>>> answer suddenly changes drastically for the PETSc case. This does not happen
>>>>>> for the NSPCG solver.
>>>>>>
>>>>>> E.g. for time steps 1-315 of an oscillating airfoil case, the pressure
>>>>>> changes smoothly and the answers are similar in both cases.
>>>>>>
>>>>>> At time=316, pressure changes from -3.22 to -3.2 for NSPCG, but pressure
>>>>>> changes from -3.21 to -60.2 for PETSc.
>>>>>>
>>>>>> This happens whether I use HYPRE's AMG or PETSc's direct solver LU.
>>>>>>
>>>>>> I have been trying to find the cause and I can't find the answer by
>>>>>> debugging. I would like to compare the values of the matrices of the 2
>>>>>> different solvers and see if there's any difference. However, NSPCG's matrix
>>>>>> is in compressed row format while PETSc's is just an address, so it can't be
>>>>>> viewed easily. Moreover, it's a big matrix, so it's not possible to check by
>>>>>> inspection. I'm thinking of subtracting one matrix from the other and
>>>>>> checking whether the result is zero. What's the best way to do this? Btw, I'm
>>>>>> using Fortran and there's no MPI.
>>>>>>
>>>>>> Thank you.
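P.S. In case it is useful to anyone else, this is roughly the check I used to locate the row with the largest entry of the difference matrix after MatAXPY (sketch only; A_mat_test and total_k are from matrix_compare above, the other names are just for illustration):

Vec row_max
PetscInt max_col(total_k),row_loc
PetscReal max_val
integer :: ierr

call VecCreateSeq(PETSC_COMM_SELF,total_k,row_max,ierr)

!largest |entry| in each row of the difference matrix, and the column it sits in
call MatGetRowMaxAbs(A_mat_test,row_max,max_col,ierr)

call VecAbs(row_max,ierr)
call VecMax(row_max,row_loc,max_val,ierr)   !row_loc is the 0-based row index

print *, 'largest difference',max_val,'at row',row_loc+1,'column',max_col(row_loc+1)+1

call VecDestroy(row_max,ierr)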
