I have done SUPG in the past with local time stepping. I assign a different dt to each node and use that dt when evaluating all element residual/Jacobian contributions for that node. It's trivially easy. Just beware of errors I've made along the way: an element-wise timestep is a horrible idea for continuous FE approximations, and make sure you get a consistent dt for nodes on processor boundaries.
The scheme works well in practice for steady problems. I guess if time accuracy is important you'd be looking to embed this in a dual-time scheme because of Roy's concerns? As for your concern about a bug - let's get together offline and maybe I can run your mesh with some various options and report my experience...

-Ben

On Apr 24, 2013, at 10:40 AM, "Manav Bhatia" <[email protected]> wrote:

> I might have to respond with a half-ignorant rant from my end too, since I
> am just getting started with the literature. I am using two domain
> decomposition books as reference: one by Quarteroni and Valli and the
> other by Toselli and Widlund.
>
> My motivation is the second point in your message: I am pseudo-time
> stepping towards a steady solution. My application is inviscid transonic
> flow simulation on a swept wing using the GLS method.
>
> This goes back to my message about the linear solver convergence last week:
> h-refinement leads to a point where the linear solver refuses to converge.
> I have tried a lot of options (raising the GMRES restart iteration to
> 1000, the ASM preconditioners that Jed had suggested, reducing the dt
> post-refinement to as low as 1% of its original value, etc.) but none have
> worked for me so far. I have not tried modifying my "tau" matrix, though.
>
> On one side, I am a little perplexed as to why others have not faced this
> issue: perhaps there is a bug in my code, perhaps the nature of transonic
> flow makes it a difficult problem, perhaps it is a weakness in GLS, or
> other reasons. I doubt there is a bug, though, since there hasn't been an
> error in the solutions so far in all the other simulations that I have done.
>
> On the other side, I feel like chopping up the matrix for the linear solve
> might lead to a set of separate and better-conditioned linear solves.
> Hence, I am looking at walking down this path.
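For anyone following along, the solver experiments described above correspond to PETSc runtime options along these lines (the option names are standard PETSc ones, but the particular values here are illustrative, not the exact settings from those runs):

```
# Raise the GMRES restart length and switch to additive-Schwarz
# preconditioning with an ILU subdomain solver:
-ksp_type gmres
-ksp_gmres_restart 1000
-pc_type asm
-pc_asm_overlap 2
-sub_pc_type ilu
```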
> I do not yet know of the challenges you pointed out, but from what I have
> read in the books so far, it seems possible to set up appropriate Dirichlet
> and Neumann BCs at the interfaces to enable a consistent solution for
> different kinds of physics. Of course, one now needs to iterate between
> the domains until convergence.
>
> On a related note, I have a feeling that the latest addition of separate
> parallel communicators in 0.9.1 might come in handy.
>
> Manav
>
> On Wed, Apr 24, 2013 at 1:13 PM, Roy Stogner <[email protected]> wrote:
>
>> On Wed, 24 Apr 2013, Manav Bhatia wrote:
>>
>>> Has anyone attempted space-varying dt for time stepping problems using
>>> libMesh?
>>
>> No, but we've got an application where it might be a decent idea.
>>
>> Half-ignorant rant:
>>
>> I'm skeptical, though. Space-varying dt is ideal if you're doing a
>> time-accurate solve of a hyperbolic problem, or if you can do
>> operator splitting and limit the space-varying dt to the explicit
>> operator(s) in a parabolic problem, but I've never seen how you can do
>> implicit space-varying dt in a time-accurate way on parabolic problems
>> without adding more DoFs to each space-time slab and so canceling out
>> most of your benefits.
>>
>> What other implicit hypersonics people do with space-varying dt seems
>> to be limited to non-time-accurate solves, where you're just
>> pseudo time stepping to get to a quasi-steady state. Which is fine,
>> we do pseudo time stepping too... except that I think the right thing
>> in this case may be to go coarser in time *and* space: if you're
>> basically using the non-time-accurate parts of your solve just to get
>> the shock moved into place so you can use a larger dt, you might as well
>> do most of that movement on a coarse grid.
>> ---
>> Roy
>
> _______________________________________________
> Libmesh-users mailing list
> [email protected]
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
