On 16/03/15 16:51, Jørgen Kvalsvik wrote:
> On 03/16/2015 04:43 PM, Bård Skaflestad wrote:
>> As for the overall construction approach, it basically boils down to
>> which is more expensive:
>>   * Having two copies of the matrix in memory and re-forming
>>     the (unchanging) sparsity structure on each call to solve()
>>   * Forming the sparsity structure once and doing per-coefficient
>>     updates
>> The latter has an O(log(number of non-zeros per row)) time complexity
>> for each coefficient and the former consumes more memory.  On the other
>> hand, if done correctly, the former approach has an essentially optimal
>> overall time complexity of
>>     O(total number of non-zero elements)
>> so that's an attractive property.
> As far as I can tell it's the former that's been implemented now, as we
> keep around the sparsity pattern and just zero it between calls to
> solve.
We currently implement the second approach, i.e. the latter of the two
above: the matrix sparsity (connection) structure does not change between
calls to solve(), although the numerical values of the coefficients
certainly do.
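To make the trade-off concrete, here is a minimal, self-contained sketch
of a per-coefficient update against a fixed sparsity pattern.  This is
plain CSR rather than the actual OPM/dune-istl types, and the names
CSRMatrix and addToEntry are purely illustrative.  The std::lower_bound
search over a row's (sorted) column indices is where the
O(log(number of non-zeros per row)) cost per coefficient comes from;
rebuilding the structure on every call to solve() instead streams through
all entries in O(total number of non-zero elements), at the price of a
second copy of the matrix in memory.

    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Schematic CSR storage with a fixed sparsity pattern.
    struct CSRMatrix {
        std::vector<std::size_t> rowPtr;  // size nrows + 1
        std::vector<std::size_t> colIdx;  // column indices, sorted within each row
        std::vector<double>      values;  // one value per structural non-zero

        // Zero the coefficients but keep the (unchanging) pattern.
        void zeroValues() { std::fill(values.begin(), values.end(), 0.0); }

        // Accumulate into an existing structural non-zero.  The binary
        // search over the row's column indices is O(log(non-zeros in row i)).
        void addToEntry(std::size_t i, std::size_t j, double v) {
            const auto first = colIdx.begin() + rowPtr[i];
            const auto last  = colIdx.begin() + rowPtr[i + 1];
            const auto pos   = std::lower_bound(first, last, j);
            assert(pos != last && *pos == j);  // (i,j) must be in the pattern
            values[static_cast<std::size_t>(pos - colIdx.begin())] += v;
        }
    };

Per assembly pass the total cost is then
O(total non-zeros * log(max non-zeros per row)), which is the logarithmic
factor mentioned above.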
We define the connection structure in method

    initSystemStructure()

which calls the helper methods

    enumerateDof(g, bc)
    allocateConnections(bc)
    setConnections(bc)
Method setConnections() is the only function that explicitly calls
'endindices()' on the matrix object, meaning that setConnections() is the
sole arbiter of a valid, fully formed sparsity structure.
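For reference, here is a minimal sketch of the standard dune-istl
BCRSMatrix 'random' build mode that these helpers presumably wrap.  The
mapping of the two build stages onto allocateConnections() and
setConnections() is schematic, and the 'connections' input is placeholder
data rather than actual OPM code; the point is merely that endindices() is
the call that freezes the pattern, after which only the numerical values
may change.

    #include <cstddef>
    #include <vector>

    #include <dune/common/fmatrix.hh>
    #include <dune/istl/bcrsmatrix.hh>

    using Block  = Dune::FieldMatrix<double, 1, 1>;
    using Matrix = Dune::BCRSMatrix<Block>;

    // 'connections[i]' holds the column indices of row i (placeholder data;
    // in reality this comes from the grid topology and boundary conditions).
    Matrix buildStructure(std::size_t n,
                          const std::vector<std::vector<std::size_t>>& connections)
    {
        Matrix A;
        A.setBuildMode(Matrix::random);
        A.setSize(n, n);

        // cf. allocateConnections(): declare each row's number of non-zeros.
        for (std::size_t i = 0; i < n; ++i)
            A.setrowsize(i, connections[i].size());
        A.endrowsizes();

        // cf. setConnections(): register the column index of every connection.
        for (std::size_t i = 0; i < n; ++i)
            for (const std::size_t j : connections[i])
                A.addindex(i, j);
        A.endindices();   // the sparsity structure is now fixed

        A = 0.0;          // coefficients start at zero; assembly fills them in
        return A;
    }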
That said, in the subsequent matrix assembly stage some of the
structurally non-zero elements from the initial connection analysis may
turn out to be numerically zero, but that does not affect the formed
sparsity structure of the underlying matrix object. It "just" means
that we will carry a hopefully small number of redundant zeros in the
matrix when solving the assembled system of simultaneous linear equations.
Sincerely,
--
Bård Skaflestad <[email protected]>
SINTEF ICT, Applied Mathematics