> Hi,
> I got quite far with my project, although I still have not managed (or
> better "have not tried...") to get the parallelization running (Shri:
> Any news about that?).
We've figured out what needs to be done but haven't done it yet :-). Your 
application needs either a vertex distribution with an overlap or a custom MPI 
reduction scheme. After speaking with Barry last week, it seems to me that the 
latter option would be the best way to proceed. A custom MPI reduction scheme 
is needed because you have two equations for every vertex, with the first 
equation needing an ADD operation and the second a PROD. Thus, we would need 
an ADD_PROD insert mode for DMLocalToGlobalXXX that we currently don't have.

> Now I would like to add a single global variable (and a single equation)
> to the equation system. Is there an elegant way to do this with DMCircuit?

Is this akin to a "Ground" node for circuits? Is the variable value constant?

After working on your example, I realized that specifying a bidirectional edge 
as two unidirectional edges in the data may cause problems for the partitioner. 
I observed that the two unidirectional edges may be assigned to different 
processors even though they connect the same vertices, which can be a problem 
when communicating ghost values. Hence, I've modified the data format in the 
attached links1.txt file to specify edges only via their nodal connectivity 
and then give the type information separately. I've also reworked your source 
code accordingly, and it gives the same answer as your original code. It gives 
a wrong answer for parallel runs because of the incorrect ghost value 
exchanges; once we have the ADD_PROD insert mode, this code should work fine 
in parallel too. I think that going forward you should use a similar data 
format.


>
> A hackish solution might be to add an additional imaginary vertex that
> is excluded from all other calculations, but that does not seem to be
> the right way to do it.
>
> Greetings,
> Florian

Attachment: aloha1.cpp

6 7
0.1
0.1
0.1
0.1
0.1
0.1

0 1
2 3
4 5
2 0
2 1
3 4
3 5

1
1
1
0
0
0
0
