Hi everyone,
currently the NonMatching coupling functions that compute the sparsity 
pattern and the matrix waste a lot of computation: the first half of both 
functions, up to

const auto cpm = GridTools::compute_point_locations(cache, all_points);

is identical in both functions. That means we're repeating a "big" 
computation twice. This is even more apparent in parallel settings, where
the loop generating all_points is quite long and complicated (even with 
https://github.com/dealii/dealii/blob/master/source/non_matching/coupling.cc).

Ideally, we would store "cpm" (computed once, in the sparsity function) and 
then re-use it in the matrix function.

My question is, what is the best way to do so?

One possibility is to modify the functions' interfaces (e.g., have the 
sparsity function return this tuple and add a corresponding argument to the 
matrix function), but I fear this would break a number of things in deal.II.

The other "natural" option would be to add a "temporary basket" to Cache, 
which would allow functions to store data and flag it as "theirs". I believe 
this behaviour is in the "spirit" of Cache and would probably prove useful 
elsewhere.

After a quick Google search, I think I'd try adding a few elements like this 
to the cache:

std::map<unsigned int, std::vector<boost::any>>

where the unsigned int is a tag that functions use to "mark" what's inside 
the vector. Would a solution like this be OK? Are there better alternatives?

What about the update flags? This information can't be computed by Cache 
itself, so would it be OK for the update to simply reset this map?
Or should we return the map anyway (plus an "old" tag)?

Best,
Giovanni

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en