I've been thinking that since what I want is to have each processor do
its work in its mesh partition one processor at a time, I could
serialize the work. I think I could loop over all processors, include a
conditional that only lets a processor work when the loop index
coincides with its rank, and include a barrier at the end of the loop
body to make them work sequentially, one at a time. Is this a bad idea?
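
Something like this is what I have in mind (just a sketch;
do_my_partition_work() is a stand-in for whatever each processor
actually does on its partition):

    for (unsigned int p = 0; p < system.comm().size(); ++p)
      {
        // Only the processor whose rank matches the loop index works.
        if (system.comm().rank() == p)
          do_my_partition_work();  // placeholder for the per-partition work

        // Everyone waits here, so the work happens strictly one
        // processor at a time.
        system.comm().barrier();
      }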

Miguel


On Mon, Sep 1, 2014 at 9:55 AM, Miguel Angel Salazar de Troya <
[email protected]> wrote:

> Thanks for your response. I would love to collaborate more with the
> community. I'm enjoying learning how to use this library, and I would
> like to help others through the same process.
>
> Each dof is modified simply by adding a small perturbation, and then I
> call assemble_qoi because I want to see the effect of perturbing only
> that one dof. The problem appears when I run it in parallel: several
> processors modify the solution vector at the same time, so when I call
> assemble_qoi, instead of having just one dof perturbed, I have
> several.
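>
> For each dof i the idea is roughly this (a sketch; eps is my small
> perturbation and evaluate_qoi() stands in for my call into
> assemble_qoi):
>
>     system.solution->add(i, eps);   // perturb dof i
>     system.solution->close();
>     system.update();
>     evaluate_qoi();                 // QoI with only dof i perturbed
>     system.solution->add(i, -eps);  // undo before moving to the next dof
>     system.solution->close();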
>
> The example in exact_solution.C is what I needed, although there is a
> problem whenever I call assemble_qoi: that function works on the global
> solution and will not see the local copy I can create following the
> aforementioned example. I'm thinking that I will build a routine
> similar to assemble_qoi, but one that uses the copy I've created.
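>
> Roughly (my_qoi_from_copy is just a name I'm making up for that
> routine):
>
>     std::vector<Number> global_soln;
>     system.solution->localize(global_soln);      // full serial copy everywhere
>     global_soln[i] += eps;                       // perturb one entry of the copy
>     Number qoi = my_qoi_from_copy(global_soln);  // assemble_qoi-like routine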
>
> Miguel
>
>
> On Sun, Aug 31, 2014 at 7:42 PM, Roy Stogner <[email protected]>
> wrote:
>
>>
>> On Sun, 31 Aug 2014, Miguel Angel Salazar de Troya wrote:
>>
>>> I apologize for the large number of questions I've been asking lately.
>>> Clearly, a deadline is looming...
>>>
>>
>> I apologize for the questions that have gone (and probably will go)
>> unanswered.  There's a catch-22 wherein people qualified to answer
>> complex questions are also people who don't have time to answer
>> complex questions.  If you want to write up FAQ entries or pay it
>> forward on libmesh-users yourself someday then we'll call it even.
>>
>>
>>> I'm trying to modify each component of the solution vector of a
>>> system in parallel, and then evaluate the qoi by calling the
>>> function assemble_qoi.
>>>
>>
>>> The problem is that if I do this in parallel, I'm actually modifying
>>> more than one component of the solution vector, therefore I'm not
>>> obtaining the value that I would like to get in assemble_qoi.
>>>
>>
>> I don't think I'm understanding this, but let me try answering a few
>> different interpretations of the question:
>>
>> If you're trying to modify every part of the solution vector in
>> parallel and you can decompose things onto subdomains additively,
>> that's easy; take a look at what typical assembly functions do.
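>>
>> Schematically (a sketch, not real assembly code; Fe and dof_indices
>> are the usual per-element contribution and dof index list):
>>
>>     // Inside the loop over active local elements: add() is safe to
>>     // call from every processor, because contributions to shared
>>     // dofs are summed when the vector is closed.
>>     system.solution->add_vector(Fe, dof_indices);
>>
>>     // After the element loop:
>>     system.solution->close();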
>>
>> If the support for the function defining each new dof value falls
>> within a processor's partition plus ghost elements, but it doesn't
>> decompose additively, you'll want to use set() instead of add(), and
>> you should test each dof first to see if it's "owned" by the local
>> processor.  Take a look at the System::project_vector implementations
>> for this case.
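>>
>> The ownership test looks roughly like this (a sketch; new_value is
>> whatever your function gives you for that dof):
>>
>>     // For each dof you're setting:
>>     const DofMap & dof_map = system.get_dof_map();
>>     if (dof >= dof_map.first_dof() && dof < dof_map.end_dof())
>>       system.solution->set(dof, new_value);  // only the owner sets it
>>
>>     // Once every dof has been visited:
>>     system.solution->close();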
>>
>> If the support for the function defining each new dof value falls
>> outside a single layer of ghost elements, and you're under a deadline,
>> then you're probably best off serializing.  See what we do in
>> exact_solution.C in the _equation_systems_fine case for an example.
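>>
>> The serialization itself is essentially just a localize;
>> schematically:
>>
>>     // Afterwards every processor holds the complete solution and can
>>     // evaluate whatever it needs from global_soln.
>>     std::vector<Number> global_soln;
>>     system.solution->localize(global_soln);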
>> ---
>> Roy
>>
>
>
>


-- 
*Miguel Angel Salazar de Troya*
Graduate Research Assistant
Department of Mechanical Science and Engineering
University of Illinois at Urbana-Champaign
(217) 550-2360
[email protected]