On Jan 5, 2012, at 6:12 PM, Ravi Kannan wrote:

> Hi Mark,
>  
>  
> From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-bounces at 
> mcs.anl.gov] On Behalf Of Mark F. Adams
> Sent: Wednesday, January 04, 2012 12:09 PM
> To: For users of the development version of PETSc
> Subject: Re: [petsc-dev] boomerAmg scalability
>  
> Is your problem unsymmetric?  I don't think this is the problem; it was fixed 
> recently.
>  
> If not, can you run with -pc_gamg_verbose and -info and send me the output?
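> 
> For example (a hypothetical invocation; substitute your own executable and 
> process count):
> 
>   mpiexec -n 2 ./your_solver -pc_type gamg -pc_gamg_verbose -info
> 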
> I have attached the verbose+info outputs for both the serial and the parallel 
> (2-partition) runs. NOTE: at one point the serial output says PC=Jacobi! Is 
> it implicitly converting the PC to Jacobi?

That is an error in the printout.
Mark
>  
> You don't need to set the block size if it is 1.
>  
> I have a feeling it has to do with the buffer sizes not being set correctly.  
> There are some heuristics in there, and they need to be improved.  What kind 
> of discretization are you using?  What order, and how many non-zeros per row?  
> Maybe try the smallest case that you have with more processors.  I have a 
> feeling it is thrashing around mallocing in MatSetValues, so it just 
> "looks" hung.
>  
> Mark
>  
> Thanks,
> Ravi.
>  
> On Jan 4, 2012, at 12:14 PM, Ravi Kannan wrote:
> 
> Hi Mark, Matt,
>  
> We recently downloaded the petsc development version to test the gamg 
> package.
>  
> This works in serial: we tried it on small cases. The parallel case (even with 
> 2 partitions) just hangs. As of now we have not set any parameters, so I 
> assume the default parameters are being used.
>  
> Do we need to explicitly set the block size with MatSetBlockSize(mat,1) for a 
> PARALLEL run? Our solver solves U, V, W, and P separately.
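> 
> That is, is the one-liner below needed before preallocation and assembly of 
> the parallel system matrix?
> 
>   MatSetBlockSize(mat, 1);  /* mat is our MPIAIJ system matrix */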
>  
> Any input on this would be great.
>  
> Thanks,
> Ravi.
>  
> From: petsc-dev-bounces at mcs.anl.gov [mailto:petsc-dev-bounces at 
> mcs.anl.gov] On Behalf Of Mark F. Adams
> Sent: Thursday, December 15, 2011 1:18 PM
> To: For users of the development version of PETSc
> Subject: Re: [petsc-dev] boomerAmg scalability
>  
>  
> On Dec 15, 2011, at 1:56 PM, Matthew Knepley wrote:
> 
> On Thu, Dec 15, 2011 at 10:23 AM, Ravi Kannan <rxk at cfdrc.com> wrote:
> Dear All,
>  
> This is Ravi Kannan from CFD Research Corporation. Recently, we have been 
> experimenting with the BoomerAMG preconditioner for some "stiff" CFD 
> problems. In that regard, all the other standard solver-preconditioner 
> combinations failed for the current CFD problem. Boomer is the only one 
> that is able to provide "converged" solutions.
>  
> We noticed that the scalability of this Boomer preconditioner is really poor. 
> For instance, even with a mesh of 2 million cells, we cannot scale to even 16 
> partitions (in contrast, the other solver-preconditioner combinations, like 
> BiCGStab/BJacobi, scaled well enough).
>  
> Are we missing something? Do we need to use a more recent version of Boomer?
>  
> Have you tried -pc_type gamg in petsc-dev?
>  
> For gamg you also want to use MPIAIJ matrices and set the block size with 
> MatSetBlockSize(mat,3) for a 3D velocity field, for instance.  You can also 
> try '-pc_gamg_type pa' or '-pc_gamg_type sa'; "pa" (plain aggregation) 
> might be better for CFD problems.
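> 
> Concretely (a rough sketch; "mat" is your parallel system matrix, set 
> before it is preallocated):
> 
>   MatSetBlockSize(mat, 3);  /* 3 dof per node, e.g. coupled U,V,W */
> 
> and then at run time one of:
> 
>   -pc_type gamg -pc_gamg_type pa    (plain aggregation)
>   -pc_type gamg -pc_gamg_type sa    (smoothed aggregation)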
>  
> Mark
> 
>   Matt
>   
> Thanks,
> Ravi.
> 
> -- 
> What most experimenters take for granted before they begin their experiments 
> is infinitely more interesting than any results to which their experiments 
> lead.
> -- Norbert Wiener
