Re: [deal.II] Tags and Communicators

2021-02-21 Thread Martin Kronbichler




On 2/19/21 2:38 PM, Timo Heister wrote:

I see additional problems with separate communicators per object:
1. For me it is unclear how operations that involve more than one
object should communicate. For example, a mat-vec has 2 vectors (src,
dst) and a matrix and as such 3 communicators. Of course you can make
an arbitrary choice here, but this doesn't seem clean to me.


That's true, but the point is to avoid different objects communicating 
on the same communicator. Which of the three you mention here is, in 
the end, not important -- in fact, we don't care: Let PETSc or 
Trilinos do what they need to do.


Well, there are parts where we do care, because we have our own parallel 
vectors and parallel matrix-vector products (mostly the matrix-free 
functionality). Those cases are linked to the triangulation's 
communicator, which might be the place where duplicating a communicator 
makes sense. One could also do it with finer granularity inside 
certain collections of vectors, but the crucial ingredient is to make 
data structures aware of the places where objects with different 
communicators come together, like different communicators in the domain 
and range partitioners of a matrix-vector product. It is actually on my 
todo list to use different communicators on coarser levels of the 
multigrid hierarchy, either in terms of different triangulations (for 
global coarsening) or just the levels in the regular h-multigrid with 
local smoothing that we have had for a long time.
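
As a sketch of one possible reading of "making data structures aware", one 
could at least check, before a matrix-vector product communicates, that the 
domain and range partitioners sit on compatible communicators. The helper 
below is purely illustrative (the name is made up, it is not existing 
deal.II API) and relies only on plain MPI:

  #include <mpi.h>

  // Illustrative helper (not deal.II API): returns true if the two
  // communicators describe the same group of ranks, i.e. they are either
  // the identical object or congruent duplicates of each other.
  bool communicators_are_compatible(MPI_Comm domain_comm, MPI_Comm range_comm)
  {
    int result = MPI_UNEQUAL;
    MPI_Comm_compare(domain_comm, range_comm, &result);
    // MPI_IDENT:     the very same communicator
    // MPI_CONGRUENT: same ranks in the same order, but a separate context
    //                (e.g. one is an MPI_Comm_dup of the other)
    return result == MPI_IDENT || result == MPI_CONGRUENT;
  }

An assertion on this condition at the entry of a vmult() would catch the case 
where the source and destination vectors were accidentally built on unrelated 
communicators.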


The important message is that we can't simply go through the library and 
switch over to duplicating communicators in every class where we hand 
in and store a communicator; at least for the linear algebra parts, 
such as parallel vectors and MatrixFree, we have to decide on a 
case-by-case basis so that we don't accidentally break anything.






2. MPI_Comm_dup can involve communication and as such can be slow for
large processor counts. Not sure you want to always pay the price for
that. I remember at least one MPI implementation that does an
allreduce (or something worse than that).


That is true too, though one would assume that whatever one ends up 
doing on these communicators is going to be more expensive than the 
duplication.


It depends on what you are doing and at what scale you are running. For 
some MPI implementations, the duplication can be pretty expensive (on 
the order of seconds) once you reach 100k or more ranks, depending on 
how they implement the operation. Imagine what happens if you do an 
MPI_Comm_dup every time you enter a Schur complement approximate inverse 
and request a temporary vector for the local solvers, an operation that 
otherwise scales well down to 1e-3 to 1e-4 seconds. Again, you need a 
case-by-case decision, and at the very least everything below the level 
of the triangulation needs to be audited.
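
To make the cost argument concrete, here is a minimal sketch (plain MPI, with 
a made-up function name, not actual deal.II code) of the pattern to avoid, 
namely paying for a communicator duplication on every application of the 
operator:

  #include <mpi.h>

  // Anti-pattern sketch: the duplication, which may be collective and can
  // cost seconds at ~100k ranks, is paid on every call, dwarfing an inner
  // operation that otherwise finishes in 1e-3 to 1e-4 seconds.
  void request_temporary_vector_each_time(MPI_Comm comm)
  {
    MPI_Comm tmp_comm;
    MPI_Comm_dup(comm, &tmp_comm);  // expensive, and repeated on every call
    // ... build the temporary vector on tmp_comm, run the local solvers ...
    MPI_Comm_free(&tmp_comm);
  }

The alternative is to duplicate once, store the result, and reuse it, as 
sketched further below for the constructor of a main class.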



It may not be an important issue. Would a compromise be to duplicate 
communicators in the constructors of all main classes in the tutorials, 
as a recommendation of best practices? (This is what ASPECT does, for 
reference.) This doesn't address the issue of what can go wrong when 
using multiple threads from within the main class, but it would at 
least cover the case where someone wants to run multiple instances of 
the main class in parallel, and it may bring the issue to people's minds.


Apart from the disagreement I expressed above, here I completely agree 
with you.
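
For illustration, a minimal sketch of that compromise, with a made-up class 
name (this is neither verbatim ASPECT nor deal.II tutorial code): the 
communicator handed to the constructor is duplicated exactly once and 
released in the destructor, so two instances of the class can never 
interleave messages on the same communicator.

  #include <mpi.h>

  class LaplaceProblem
  {
  public:
    explicit LaplaceProblem(const MPI_Comm communicator)
    {
      // Pay the (possibly collective) duplication cost exactly once.
      MPI_Comm_dup(communicator, &mpi_communicator);
    }

    ~LaplaceProblem()
    {
      MPI_Comm_free(&mpi_communicator);
    }

    // All parallel members (triangulation, vectors, solvers, ...) would be
    // built on mpi_communicator rather than on the communicator passed in.

  private:
    MPI_Comm mpi_communicator;
  };

One would of course also have to delete or adjust the copy operations of such 
a class so that the duplicated communicator is not freed twice.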


Best,
Martin



Re: [deal.II] Tags and Communicators

2021-02-21 Thread Wolfgang Bangerth

On 2/19/21 2:38 PM, Timo Heister wrote:

I see additional problems with separate communicators per object:
1. For me it is unclear how operations that involve more than one
object should communicate. For example, a mat-vec has 2 vectors (src,
dst) and a matrix and as such 3 communicators. Of course you can make
an arbitrary choice here, but this doesn't seem clean to me.


That's true, but the point is to avoid different objects communicating on the 
same communicator. Which of the three you mention here is, in the end, not 
important -- in fact, we don't care: Let PETSc or Trilinos do what they need 
to do.




2. MPI_Comm_dup can involve communication and as such can be slow for
large processor counts. Not sure you want to always pay the price for
that. I remember at least one MPI implementation that does an
allreduce (or something worse than that).


That is true too, though one would assume that whatever one ends up doing on 
these communicators is going to be more expensive than the duplication.




3. The number of communicators can be quite limited, see [1]

[1] https://www.mcs.anl.gov/~thakur/papers/mpi-million.pdf


I think I read that a couple of years ago. I should probably do that again.
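
For reference, a small stand-alone MPI probe of that limit (not deal.II code; 
what exactly happens when the implementation runs out of communicator 
contexts is implementation-dependent, which is why the error handler is 
switched to MPI_ERRORS_RETURN below):

  #include <mpi.h>
  #include <cstdio>
  #include <vector>

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);
    // Ask MPI to return error codes instead of aborting, so we can see
    // when the implementation refuses to create another communicator.
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    std::vector<MPI_Comm> comms;
    MPI_Comm c;
    while (MPI_Comm_dup(MPI_COMM_WORLD, &c) == MPI_SUCCESS)
      comms.push_back(c);

    std::printf("managed to create %zu communicators\n", comms.size());

    for (MPI_Comm &comm : comms)
      MPI_Comm_free(&comm);
    MPI_Finalize();
    return 0;
  }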

It may not be an important issue. Would a compromise be to duplicate 
communicators in the constructors of all main classes in the tutorials, as a 
recommendation of best practices? (This is what ASPECT does, for reference.) 
This doesn't address the issue of what can go wrong when using multiple 
threads from within the main class, but it would at least cover the case 
where someone wants to run multiple instances of the main class in parallel, 
and it may bring the issue to people's minds.


Best
 W.

--

Wolfgang Bangerth  email: bange...@colostate.edu
   www: http://www.math.colostate.edu/~bangerth/



Re: [deal.II] Re: Configuring deal.II with LAPACK

2021-02-21 Thread kaleem iqbal
https://github.com/dealii/candi is the best installation link for all the
libraries required by deal.II.


On Sat, Feb 20, 2021 at 11:57 PM Giselle Sosa Jones 
wrote:

> Hi Bruno,
>
> Thank you for your reply. I tried this but got the same error as before. I
> am not sure what's happening since I am using PETSc with Intel mkl in other
> software, and it works fine there. Do you happen to have any other
> suggestions? Maybe it is an issue with Intel mkl?
>
> Thanks again.
>
> Best,
> Giselle
>
> On Fri, 19 Feb 2021 at 14:11, Bruno Turcksin 
> wrote:
>
>> Giselle,
>>
>> Instead of setting the library yourself can you try:
>> -DBLAS_LIBRARY_NAMES:STRING='mkl_core;mkl_sequential'  
>> -DLAPACK_LIBRARY_NAMES:STRING=mkl_intel_lp64
>>
>> Don't set BLAS/LAPACK_FOUND/LIBRARIES/LINKER_FLAGS. Let CMake find the
>> libraries and the flags it needs to use.
>>
>> Best,
>>
>> Bruno
>>
>>
>> On Thursday, February 18, 2021 at 6:46:57 PM UTC-5 gisel...@gmail.com
>> wrote:
>>
>>> Hello,
>>>
>>> I am trying to configure deal.II with LAPACK using the following command:
>>>
>>>  cmake \
>>>    -DCMAKE_INSTALL_PREFIX=/mnt/c/Users/Giselle/Documents/dealii_install \
>>>    /mnt/c/Users/Giselle/Documents/dealii \
>>>    -DDEAL_II_WITH_PETSC=ON -DPETSC_DIR=$PETSC_DIR \
>>>    -DDEAL_II_WITH_UMFPACK=ON -DUMFPACK_DIR=$PETSC_DIR \
>>>    -DDEAL_II_WITH_LAPACK=ON -DLAPACK_FOUND=true \
>>>    -DLAPACK_LIBRARIES="/home/giselle/intel/mkl/lib/intel64/libmkl_blas95_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_lapack95_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_gf_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/home/giselle/intel/mkl/lib/intel64/libmkl_sequential.so;/home/giselle/intel/mkl/lib/intel64/libmkl_core.so" \
>>>    -DLAPACK_LINKER_FLAGS="-lgfortran -lm" \
>>>    -DDEAL_II_WITH_BLAS=ON -DBLAS_FOUND=true \
>>>    -DBLAS_LIBRARIES="/home/giselle/intel/mkl/lib/intel64/libmkl_blas95_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_lapack95_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_gf_lp64.a;/home/giselle/intel/mkl/lib/intel64/libmkl_intel_lp64.so;/home/giselle/intel/mkl/lib/intel64/libmkl_sequential.so;/home/giselle/intel/mkl/lib/intel64/libmkl_core.so" \
>>>    -DBLAS_LINKER_FLAGS="-lgfortran -lm"
>>>
>>> I am getting the following message:
>>>
>>> Could not find the lapack library!
>>>
>>>   Could not find a sufficient BLAS/LAPACK installation:
>>>
>>>   BLAS/LAPACK symbol check failed! This usually means that your
>>> BLAS/LAPACK
>>>   installation is incomplete or the link line is broken.
>>>
>>> Attached you will also find the CMakeError.log and CMakeOutput.log
>>> files. Does anyone have any idea of what could be going on? I think it has
>>> something to do with the linker flags but I have no idea and I've tried
>>> everything I could think of.
>>>
>>> Thank you so much!
>>>
>>> Regards,
>>> Giselle
>>>
