OK, I see that I can just use flagging by error tolerance…

On 2/16/17, 10:37 AM, "Salazar De Troya, Miguel" <[email protected]> wrote:

    I thought that a Dorfler strategy would be more favorable in situations 
with a singularity, where the error in certain regions dominates. If we refine 
a fixed fraction of the elements, we keep refining elements whose errors are 
not as large as the ones at the singularity, whereas with Dorfler, more of the 
effort is concentrated at the singularity.
    
    Miguel
    
    From: <[email protected]> on behalf of Vikram Garg 
<[email protected]>
    Date: Tuesday, February 14, 2017 at 11:26 AM
    To: "Paul T. Bauman" <[email protected]>
    Cc: "Salazar De Troya, Miguel" <[email protected]>, 
"[email protected]" <[email protected]>
    Subject: Re: [Libmesh-users] Criteria for marking strategy
    
    Hello Salazar,
    I would think that the importance of using an optimal marking strategy 
such as Dorfler depends on how difficult you think it will be to see gains 
with AMR. For example, if your underlying solution does not have singularities 
or boundary layers, it becomes more important to try to equalize errors across 
elements, make the most of each new dof added, and so on.
    
    However, if the underlying solution does have such sharp features, I would 
not expect to see many gains from using an optimal marking strategy over a more 
heuristic one like refine_frac or the others which libMesh currently supports. 
In such problems, the solution can effectively be seen as a sum of a singular 
and a non-singular component, and it is the equalization of the error in these 
two components that we seek first. AMR should be able to do this pretty well 
without needing sophisticated marking strategies.
    
    The interesting question is whether this holds for goal-oriented refinement 
as well. Note that for problems and adjoint systems without singularities or 
boundary layers, we expect the QoI to converge at twice the rate of the global 
H1 solution, with the usual Galerkin (non-stabilized) formulations. Here again, 
for an error that is already decreasing fast, goal-oriented AMR will only offer 
an improvement in the convergence constant, and I would again expect 
Dorfler-type marking strategies to become more important. But for problems 
where the forward or adjoint problem has singular behaviour, we have seen 
tremendous benefits from AMR using the marking strategies already present in 
libMesh.
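
    To make the rate-doubling claim concrete, here is the standard duality 
argument, sketched under assumptions the thread does not state explicitly (a 
continuous, coercive bilinear form a, a linear QoI Q, degree-p Galerkin 
approximations u_h and z_h, and smooth primal and adjoint solutions u and z). 
Define the adjoint problem a(v, z) = Q(v) for all v. Then
    \[
      Q(u) - Q(u_h) = a(u - u_h, z) = a(u - u_h, z - z_h)
                    \le C\,\|u - u_h\|_{H^1}\,\|z - z_h\|_{H^1}
                    = O(h^p)\,O(h^p) = O(h^{2p}),
    \]
    using Galerkin orthogonality in the second step, while the global error 
itself only satisfies \(\|u - u_h\|_{H^1} = O(h^p)\).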
    
    I am admittedly not well versed in the optimal marking strategies, so if 
anyone can pick holes in these arguments, please do so.
    
    Thanks.
    
    On Tue, Feb 14, 2017 at 11:35 AM, Paul T. Bauman <[email protected]> wrote:
    On Mon, Feb 13, 2017 at 6:03 PM, Salazar De Troya, Miguel
    <[email protected]> wrote:
    
    >
    > I’ve read in several papers that in AMR the most common marking strategy
    > is Dorfler marking, which is mathematically grounded.
    
    
    There is some theory that, under some other regularity assumptions (that
    can be proven for quite a few problems), Dorfler marking provides
    optimality for the adaptive refinement process. I admit I'm not deeply
    familiar with the theory. Carstensen's paper (
    https://arxiv.org/pdf/1312.1171.pdf) and Nochetto's review paper (
    http://www-users.math.umd.edu/~rhn/lectures/adaptivity.pdf) are probably
    good starting points (both have been on my "to read" list for a while now).
    
    
    > However, I have not seen this implemented in libMesh.
    
    
    It looks as though it's not. Patches welcome! (I would envision this would
    be dropped in MeshRefinement as flag_elements_by_dorfler() or some such.)
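
    For reference, here is a minimal, self-contained sketch of what the Dorfler
    (bulk) criterion does, written against a hypothetical interface rather than
    the existing libMesh API: sort elements by error indicator and mark the
    smallest set whose squared indicators account for a fraction theta of the
    total squared error.

    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <vector>

    // Hypothetical sketch: given per-element error indicators eta_e, mark the
    // smallest set of elements whose squared indicators sum to at least
    // theta * sum_e eta_e^2 (Dorfler / bulk criterion). Returns the indices
    // of the marked elements.
    std::vector<std::size_t>
    dorfler_marking(const std::vector<double> & error_per_elem, double theta)
    {
      std::vector<std::size_t> order(error_per_elem.size());
      std::iota(order.begin(), order.end(), 0);

      // Sort element indices by decreasing error indicator.
      std::sort(order.begin(), order.end(),
                [&](std::size_t a, std::size_t b)
                { return error_per_elem[a] > error_per_elem[b]; });

      double total_sq = 0.;
      for (double eta : error_per_elem)
        total_sq += eta * eta;

      std::vector<std::size_t> marked;
      double accumulated_sq = 0.;
      for (std::size_t e : order)
        {
          marked.push_back(e);
          accumulated_sq += error_per_elem[e] * error_per_elem[e];
          if (accumulated_sq >= theta * total_sq)
            break;
        }
      return marked;
    }

    A flag_elements_by_dorfler() built on something like this would presumably
    fill error_per_elem from an ErrorVector and set the refinement flag on the
    returned elements; theta in the range 0.3-0.5 is a common choice in the
    Dorfler-marking literature.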
    
    
    > I believe that this marking strategy does not consider the coarsening
    > portion of the refinement.
    
    
    I believe there might be some extensions (from brief Googling), but I
    haven't read anything in detail. Nevertheless, note that coarsening is
    really only important for unsteady problems since, theoretically, optimal
    adaptive refinement for steady problems does not require coarsening.
    
    
    > On which theories are the current marking strategies libMesh implements
    > based?
    >
    
    Maximum strategies, i.e. refine the elements whose error exceeds some
    fraction of the maximum element error (which goes back to Babuska).
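
    For comparison with the Dorfler sketch above, here is the maximum strategy
    under the same hypothetical interface: mark every element whose indicator
    exceeds a fixed fraction of the largest indicator.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical sketch of the maximum strategy: mark element e whenever
    // eta_e >= frac * max_e eta_e.
    std::vector<std::size_t>
    maximum_marking(const std::vector<double> & error_per_elem, double frac)
    {
      if (error_per_elem.empty())
        return {};

      const double eta_max =
        *std::max_element(error_per_elem.begin(), error_per_elem.end());

      std::vector<std::size_t> marked;
      for (std::size_t e = 0; e < error_per_elem.size(); ++e)
        if (error_per_elem[e] >= frac * eta_max)
          marked.push_back(e);
      return marked;
    }

    The difference from the Dorfler sketch is that the marked set is controlled
    by the size of the largest indicator rather than by a fixed share of the
    total error.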
    
    Best,
    
    Paul
    
    --
    Vikram Garg
    Postdoctoral Associate
    The University of Texas at Austin
    
    http://vikramvgarg.wordpress.com/
    http://www.runforindia.org/runners/vikramg
    