OK, I got it.
>> If you want remote DoF values in parallel you have to sync or serialize
If I understood correctly, this serialization should be done up front,
once, before iterating through the elements.
As in introduction_ex4, it seems that the parallelization is confined
to "solve", which returns only after the job is done.
I should then serialize to the master node just after the "solve".
Is that right?
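For instance, a minimal sketch of what I have in mind (assuming a
System named `system`; I believe NumericVector::localize_to_one
gathers the whole vector onto one rank):

    #include "libmesh/numeric_vector.h"
    #include "libmesh/system.h"
    #include <vector>

    // After solve() returns on every rank, gather the complete
    // solution vector onto processor 0 for the GUI to read.
    system.solve();
    std::vector<libMesh::Number> global_soln;
    system.solution->localize_to_one(global_soln, 0);
    if (system.processor_id() == 0)
      {
        // global_soln[dof] now holds the value of every DoF.
      }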
>> if you're not writing speed-critical code then
Speed is not critical so far, but it will become so soon.
I will look at FEMContext. Thanks.
>> No problem, but the price is that we keep discussion on-list,
Sorry for that; it was a Reply vs. Reply-all slip.
Thanks once again.
On Thu, Aug 10, 2017 at 1:54 PM, Roy Stogner <royst...@ices.utexas.edu> wrote:
> On Thu, 10 Aug 2017, Renato Poli wrote:
>> Now I am trying to solve the Poisson equation (as if there were no
>> storage and no compressibility in the media) and to plot the results
>> in my GUI.
>> As I understood from your notes, something like that is going to work -
>> | foreach e = element in mesh
>> |   foreach n = node in e
>> |     dof = n.dof_number(1,0,0)
> Assuming that you're trying to get dof indices for the *first*
> variable in the *second* System in your EquationSystems, yes. If
> you've still just got one System then you want dof_number(0,0,0).
>> | node_pressure = system.solution(dof)
> In serial, close - solution is a UniquePtr so you need
> (*system.solution)(dof) to use it. In parallel,
> (*system.solution)(dof) is undefined behavior (or an assertion failure
> in dbg/devel modes) unless that node is owned by the processor calling
> it. Using system.current_solution(dof) gets you (the last cached copy
> of) ghosted node DoF values too. If you want remote DoF values in
> parallel you have to sync or serialize them yourself.
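So, if I follow, a corrected sketch of that loop would be something
like this (assuming variable 0 of a System named `system` on a mesh
`mesh`; please correct me if the calls are off):

    #include "libmesh/elem.h"
    #include "libmesh/mesh.h"
    #include "libmesh/node.h"
    #include "libmesh/system.h"

    for (const auto & elem : mesh.active_local_element_ptr_range())
      for (const auto & node : elem->node_ref_range())
        {
          // Arguments are (system number, variable number, component).
          const libMesh::dof_id_type dof =
            node.dof_number(system.number(), 0, 0);

          // Safe for owned *and* ghosted DoFs, unlike
          // (*system.solution)(dof):
          const libMesh::Number node_pressure =
            system.current_solution(dof);
        }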
>> How do I use the shape functions to interpolate inside the element?
> There are about five different ways to do that, I'm afraid, with
> tradeoffs between flexibility, efficiency, and brevity. I build an
> FEMContext in most cases, but if you're not writing speed-critical
> code then System::point_value(variable_number, point, element) is
> the easiest.
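For my notes, a minimal point_value sketch (assuming variable 0 and a
point `p` known to lie inside `*elem`):

    #include "libmesh/point.h"
    #include "libmesh/system.h"

    // point_value does the inverse mapping and evaluates the shape
    // functions at an arbitrary physical point for you.
    const libMesh::Point p(0.5, 0.5);
    const libMesh::Number value = system.point_value(0, p, *elem);

    // Without an element argument, point_value first has to locate
    // the element containing p, which is more expensive:
    // const libMesh::Number v2 = system.point_value(0, p);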
>> If you'll allow me (sorry to bother you this much):
> No problem, but the price is that we keep discussion on-list, because:
>> I didn't completely understand the parallelization strategy of libMesh.
>> Do you have any readings to suggest?
> No great ones. I'm afraid that just reading through some of the
> example codes as a tutorial and then searching through
> with questions is how most of our users get up to speed.