Dear Barry,

This is the general tenor I have seen so far, but can I get a more 
tangible answer somewhere? Something that says "if you have more than X 
unknowns / more than Y interface unknowns / fewer than Z (preconditioned) 
iterations, you should use an iterative solve"? Even better, something 
that says "across slow networks"? I have not yet been able to find 
anything concrete.

Best regards
Stefan

PS: Just to make sure, I used the term "grid" to denote a loosely 
coupled computer system, not a computational mesh.

On 29.12.2012 22:21, Barry Smith wrote:
>    My off-the-cuff response is that computing the exact Schur complements 
> for the subdomains is so expensive that it swamps any savings from 
> reducing the amount of communication, and it requires so much memory. 
> Thus solvers like these may make sense only when the problem is 
> "non-standard" enough that iterative methods simply don't work (perhaps due 
> to extreme ill-conditioning). Such problems do exist, but for most "PDE" 
> problems, given enough time and effort, one can cook up the right combination 
> of block-splittings and multilevel (multigrid) methods to get a much more 
> efficient solver that delivers the accuracy you need long before the Schur 
> complements would have been computed.
>
>     Barry
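The fill-in behind this point can be seen in a few lines of plain
numpy/scipy (not PETSc); the 2D Laplacian and the one-grid-column
interface below are illustrative choices, not anything from the thread:

    import numpy as np
    import scipy.sparse as sp

    n = 16                                   # illustrative grid size
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).toarray()

    # Take the middle grid column as the "interface", the rest as interiors;
    # removing that column decouples the left and right halves.
    idx = np.arange(n * n).reshape(n, n)
    B = idx[:, n // 2]                       # interface unknowns
    I = np.setdiff1d(idx.ravel(), B)         # interior unknowns

    # Exact Schur complement S = A_BB - A_BI A_II^{-1} A_IB.
    S = A[np.ix_(B, B)] - A[np.ix_(B, I)] @ np.linalg.solve(
        A[np.ix_(I, I)], A[np.ix_(I, B)])

    print("A nonzero fraction:", np.count_nonzero(A) / A.size)
    print("S nonzero fraction:",
          np.count_nonzero(np.abs(S) > 1e-12) / S.size)   # ~1.0: dense

For an interface with m unknowns this means O(m^2) storage and O(m^3) work
just to factor S, on top of the subdomain solves needed to form it.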
>
> On Dec 29, 2012, at 12:59 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>
>> Sorry for the slow reply. What you are describing _is_ multifrontal 
>> factorization, or alternatively, (non-iterative) substructuring. It is a 
>> direct solve and boils down to a few large dense direct solves. Incomplete 
>> factorization is one way of preventing the Schur complements from getting 
>> too dense, but it's not very reliable.
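As a rough illustration of how an incomplete factorization limits that
fill, scipy's spilu exposes a drop tolerance and fill factor; the test
matrix and parameter values below are illustrative:

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # 2D Laplacian on an n-by-n grid (illustrative test matrix).
    n = 32
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()

    lu = spla.splu(A)                                  # exact LU: heavy fill
    ilu = spla.spilu(A, drop_tol=1e-3, fill_factor=5)  # drops small entries
    print("A nnz:       ", A.nnz)
    print("exact LU nnz:", lu.L.nnz + lu.U.nnz)
    print("ILU nnz:     ", ilu.L.nnz + ilu.U.nnz)

The dropped entries are exactly where the error comes from, hence the
reliability caveat above.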
>>
>> There are many other ways of retaining structure in the supernodes (i.e., 
>> avoid unstructured dense matrices), at the expense of some error. These 
>> methods "compress" the Schur complement using low-rank representations for 
>> long-range interaction. These are typically combined with an iterative 
>> method.
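A toy version of that compression, with a plain truncated SVD standing in
for the structured low-rank formats real solvers use, and a smooth kernel
standing in for a Schur complement (both are illustrative stand-ins):

    import numpy as np

    # Smooth long-range coupling between two groups of interface points.
    m = 200
    x = np.linspace(0.0, 1.0, m)
    K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))

    # The off-diagonal (far-field) block has rapidly decaying singular
    # values, so a truncated SVD represents it accurately with low rank.
    block = K[:m // 2, m // 2:]
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    r = int(np.sum(s > 1e-8 * s[0]))        # numerical rank at rel. tol 1e-8
    approx = (U[:, :r] * s[:r]) @ Vt[:r]

    print("block shape:", block.shape, "numerical rank:", r)
    print("relative error:",
          np.linalg.norm(approx - block) / np.linalg.norm(block))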
>>
>> Multigrid and multilevel DD methods can be thought of as an alternate way to 
>> compress (approximately) the long-range interaction coming from inexact 
>> elimination (dimensional reduction of interfaces).
>>
>> On Wed, Dec 19, 2012 at 10:25 AM, Stefan Kurzbach <stefan.kurzbach at tuhh.de> wrote:
>> Hello everybody,
>>
>> in my recent research on the parallelization of a 2D unstructured flow 
>> model code I came upon a question about domain decomposition techniques on 
>> "grids". Maybe someone knows of any previous results on this?
>>
>> Typically, when doing large simulations with many unknowns, the problem is 
>> distributed across many compute nodes and solved in parallel by some 
>> iterative method. Many of these iterative methods boil down to a large 
>> number of distributed matrix-vector multiplications (on the order of the 
>> number of iterations). This means there are many synchronization points in 
>> the algorithms, which makes them tightly coupled. This has been found to 
>> work well on clusters with fast networks.
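To make those synchronization points concrete, here is a sketch of
unpreconditioned conjugate gradients with the communication spots marked;
in a distributed run every dot product becomes a global reduction and
every matrix-vector product needs a neighbor (halo) exchange. numpy stands
in for the distributed matrix and vectors:

    import numpy as np

    def cg(A, b, tol=1e-8, maxit=1000):
        # Unpreconditioned CG; the comments mark what turns into
        # communication in a distributed-memory run.
        x = np.zeros_like(b)
        r = b.copy()                 # residual for the zero initial guess
        p = r.copy()
        rr = r @ r                   # dot product -> global reduction
        bnorm = np.sqrt(b @ b)       # one more reduction, done once
        for k in range(maxit):
            Ap = A @ p               # mat-vec -> neighbor (halo) exchange
            alpha = rr / (p @ Ap)    # dot product -> global reduction
            x += alpha * p
            r -= alpha * Ap
            rr_new = r @ r           # dot product -> global reduction
            if np.sqrt(rr_new) <= tol * bnorm:
                return x, k + 1
            p = r + (rr_new / rr) * p
            rr = rr_new
        return x, maxit

    # 1D Laplacian test problem (illustrative).
    n = 100
    A = (np.diag(np.full(n, 2.0))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    x, its = cg(A, np.ones(n))
    print(its, "iterations, ~3 reductions + 1 exchange per iteration")

So the number of latency-bound synchronizations grows linearly with the
iteration count, which is exactly what hurts on a slow network.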
>>
>> Now my question:
>>
>> What if there is a small number of very powerful nodes (say, fewer than 
>> 10) connected by a slow network, e.g. several computer clusters connected 
>> over the internet (some people call this "grid computing")? I expect that 
>> the traditional iterative methods will not be as efficient here (any 
>> references?).
>>
>> My guess is that a solution method with fewer synchronization points will 
>> work better, even though that method may be computationally more expensive 
>> than traditional methods. An example would be a domain decomposition 
>> approach with direct solution of the Schur complement on the interface, 
>> which requires the interface to be small compared to the subdomains. As 
>> this algorithm basically works in three decoupled phases (solve the 
>> subdomains for several right-hand sides, assemble and solve the Schur 
>> complement system, correct the subdomain results), it should be well 
>> suited, but I have no idea how to test or otherwise prove this. Has 
>> anybody given this any thought before, possibly dating back to the '80s 
>> and '90s, when slow networks were more common?
>>
>> Best regards
>>
>> Stefan
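
The three decoupled phases described above can be sketched in dense numpy
for two subdomains sharing one small interface; the block names, sizes,
and random SPD stand-ins are illustrative, and the dense solves stand in
for per-subdomain direct factorizations:

    import numpy as np

    rng = np.random.default_rng(0)

    def spd(n):
        # Random SPD block, standing in for a subdomain matrix.
        M = rng.standard_normal((n, n))
        return M @ M.T + n * np.eye(n)

    n1, n2, nb = 40, 50, 6           # two interiors, one small interface
    A11, A22, Abb = spd(n1), spd(n2), spd(nb)
    A1b = rng.standard_normal((n1, nb))
    A2b = rng.standard_normal((n2, nb))
    b1, b2, bb = (rng.standard_normal(n1), rng.standard_normal(n2),
                  rng.standard_normal(nb))

    # Phase 1 (fully parallel): each node solves its interior block for
    # several right-hand sides (the interface columns plus its load vector).
    Y1 = np.linalg.solve(A11, np.c_[A1b, b1])
    Y2 = np.linalg.solve(A22, np.c_[A2b, b2])

    # Phase 2 (the only coupled step): assemble and solve the small dense
    # interface system S xb = g.
    S = Abb - A1b.T @ Y1[:, :nb] - A2b.T @ Y2[:, :nb]
    g = bb - A1b.T @ Y1[:, nb] - A2b.T @ Y2[:, nb]
    xb = np.linalg.solve(S, g)

    # Phase 3 (fully parallel again): correct the subdomain solutions.
    x1 = Y1[:, nb] - Y1[:, :nb] @ xb
    x2 = Y2[:, nb] - Y2[:, :nb] @ xb

    # Check against a monolithic solve of the assembled block system.
    A = np.block([[A11, np.zeros((n1, n2)), A1b],
                  [np.zeros((n2, n1)), A22, A2b],
                  [A1b.T, A2b.T, Abb]])
    print(np.allclose(np.r_[x1, x2, xb],
                      np.linalg.solve(A, np.r_[b1, b2, bb])))   # True

Phases 1 and 3 need no communication between the nodes, and the message
count is fixed by the interface size rather than growing with an iteration
count, which is the property asked about above.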
