Re: [petsc-users] Better solver and preconditioner to use multiple GPU

2023-11-09 Thread Randall Mackie
Hi Ramoni,

All EM induction methods solved numerically, for example with finite differences, 
are already difficult because of the null space of the curl-curl equations, and 
adding air layers on top of your model introduces another singularity. 
These have been dealt with in the past by adding in some sort of divergence 
condition. Solving the curl-curl equations with a direct solver is fine, but 
iterative solutions are difficult.

There is no easy out-of-the-box solution to this, but you can look at using 
multigrid as a PC, though this requires special care; see for example:

https://academic.oup.com/gji/article-pdf/207/3/1554/6623047/ggw352.pdf 
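If you want to experiment in that direction with PETSc, one possible starting point 
(a sketch only, not something I can promise will converge for your model) is hypre's 
AMS solver, which is an AMG variant built specifically for curl-curl systems:

   -ksp_type gmres
   -pc_type hypre
   -pc_hypre_type ams

AMS also needs the discrete gradient operator, supplied with 
PCHYPRESetDiscreteGradient() before the solve, and how it copes with the air layers 
and the null space is something you would have to test for your problem.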



A good way to stabilize curl-curl solutions is by explicit inclusion of 
grad-div J:

https://academic.oup.com/gji/article/216/2/906/5154929 



Good luck


Randy Mackie
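
P.S. Regarding the GPU part of your question: once you have a preconditioner that 
makes sense algorithmically, moving it to GPUs in PETSc is mostly a matter of 
runtime options. A minimal sketch, assuming a CUDA-enabled PETSc build (the Kokkos 
backend is analogous):

   -ksp_type bcgs
   -pc_type gamg
   -mat_type aijcusparse
   -vec_type cuda
   -ksp_monitor_true_residual

(If your matrices come from a DM, use -dm_mat_type/-dm_vec_type instead, and 
-pc_type hypre needs a GPU-enabled hypre build.) With block Jacobi/ILU the 
triangular solves end up on the GPU where, as Jed notes below, they are very slow, 
so an AMG-type preconditioner is usually the better starting point, with all the 
curl-curl caveats above.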


> On Nov 9, 2023, at 10:54 AM, Ramoni Z. Sedano Azevedo wrote:
> 
> We are solving the Direct Problem of Controlled Source Electromagnetics 
> (CSEM) using finite difference discretization.
> 
> On Wed, Nov 8, 2023 at 1:22 PM, Jed Brown wrote:
> What sort of problem are you solving? Algebraic multigrid like gamg or hypre 
> are good choices for elliptic problems. Sparse triangular solves have 
> horrific efficiency even on one GPU so you generally want to do your best to 
> stay away from them.
> 
> "Ramoni Z. Sedano Azevedo"  > writes:
> 
> > Hey!
> >
> > I am using PETSc in a Fortran code and we use MPI to parallelize the
> > code.
> >
> > At the moment, the options that have been used are
> > -ksp_monitor_true_residual
> > -ksp_type bcgs
> > -pc_type bjacobi
> > -sub_pc_type ilu
> > -sub_pc_factor_levels 3
> > -sub_pc_factor_fill 6
> >
> > Now, we want to use multiple GPUs and I would like to know if there is a
> > better solver and preconditioner pair to apply in this case.
> >
> > Yours sincerely,
> > Ramoni Z. S. Azevedo



Re: [petsc-users] Better solver and preconditioner to use multiple GPU

2023-11-09 Thread Ramoni Z. Sedano Azevedo
We are solving the Direct Problem of Controlled Source Electromagnetics
(CSEM) using finite difference discretization.

On Wed, Nov 8, 2023 at 1:22 PM, Jed Brown wrote:

> What sort of problem are you solving? Algebraic multigrid like gamg or
> hypre are good choices for elliptic problems. Sparse triangular solves have
> horrific efficiency even on one GPU so you generally want to do your best
> to stay away from them.
>
> "Ramoni Z. Sedano Azevedo"  writes:
>
> > Hey!
> >
> > I am using PETSc in a Fortran code and we use MPI to parallelize the
> > code.
> >
> > At the moment, the options that have been used are
> > -ksp_monitor_true_residual
> > -ksp_type bcgs
> > -pc_type bjacobi
> > -sub_pc_type ilu
> > -sub_pc_factor_levels 3
> > -sub_pc_factor_fill 6
> >
> > Now, we want to use multiple GPUs and I would like to know if there is a
> > better solver and preconditioner pair to apply in this case.
> >
> > Yours sincerely,
> > Ramoni Z. S. Azevedo
>


[petsc-users] Re: PETSC breaks when using HYPRE preconditioner

2023-11-09 Thread Pantelis Moschopoulos
Barry,

I configured PETSc with --with-debugging=yes. I think this is enough to block 
any optimizations, right?
I tried both the Intel and GNU compilers. The error persists.
I tried to change the preconditioner and use PILUT instead of BoomerAMG from 
Hypre. Still, the error appears.
I noticed that I do not have any problem when only one KSP is present. I am 
afraid that this is the culprit.
I will try your suggestion, set up an X Windows DISPLAY variable, and send 
back the results.

Thanks for your time,
Pantelis

From: Barry Smith 
Sent: Thursday, 9 November 2023, 5:53 PM
To: Pantelis Moschopoulos 
Cc: petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] PETSC breaks when using HYPRE preconditioner

  Pantelis

   If you can set an X Windows DISPLAY variable that works, you can run with 
-on_error_attach_debugger and gdb should pop up in an xterm on MPI rank 16, 
showing the code where it is crashing (based on Valgrind's "Address 0x0 is not 
stack'd, malloc'd or (recently) free'd", there will be a pointer of 0 that should 
not be). Or, if the computer system has some parallel debugger, you can use that 
directly. For lldb, use -on_error_attach_debugger lldb

  If you have some compiler optimizations set when you ./configure PETSc you 
might try making another PETSC_ARCH without optimizations (this is PETSc's 
default when you do not use --with-debugging=0). Does it still crash with no 
optimizations?  Perhaps also try with a different compiler?

  Barry




On Nov 9, 2023, at 6:57 AM, Pantelis Moschopoulos wrote:

Hello everyone,

I am trying to use PETSc coupled with Hypre BoomerAMG as a preconditioner in our 
in-house code to simulate the transient motion of a complex fluid with finite 
elements. The problem is that after a random number of iterations, an error 
arises when Hypre is called.

The error that I get in the terminal is the following:
[16]PETSC ERROR: 

[16]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably 
memory access out of range
[16]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[16]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
https://petsc.org/release/faq/
[16]PETSC ERROR: -  Stack Frames 

[16]PETSC ERROR: The line numbers in the error traceback are not always exact.
[16]PETSC ERROR: #1 Hypre solve
[16]PETSC ERROR: #2 PCApply_HYPRE() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/impls/hypre/hypre.c:451
[16]PETSC ERROR: #3 PCApply() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:486
[16]PETSC ERROR: #4 PCApplyBAorAB() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:756
[16]PETSC ERROR: #5 KSP_PCApplyBAorAB() at 
/home/pmosx/Libraries/petsc/include/petsc/private/kspimpl.h:443
[16]PETSC ERROR: #6 KSPGMRESCycle() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:146
[16]PETSC ERROR: #7 KSPSolve_GMRES() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:227
[16]PETSC ERROR: #8 KSPSolve_Private() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:910
[16]PETSC ERROR: #9 KSPSolve() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:1082

At the same time, I use valgrind, and when the program stops it reports the 
following:
==1261647== Invalid read of size 8
==1261647==at 0x4841C74: _intel_fast_memcpy (in 
/usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1261647==by 0x16231F73: hypre_GaussElimSolve (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x1622DB4F: hypre_BoomerAMGCycle (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x1620002E: hypre_BoomerAMGSolve (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x12B6F8F8: PCApply_HYPRE (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12C38785: PCApply (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12C36A39: PCApplyBAorAB (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x126299E1: KSPGMRESCycle (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12628051: KSPSolve_GMRES (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x127A532E: KSPSolve_Private (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x127A3C8A: KSPSolve (in 

Re: [petsc-users] PETSC breaks when using HYPRE preconditioner

2023-11-09 Thread Barry Smith
  Pantelis

   If you can set an X Windows DISPLAY variable that works, you can run with 
-on_error_attach_debugger and gdb should pop up in an xterm on MPI rank 16, 
showing the code where it is crashing (based on Valgrind's "Address 0x0 is not 
stack'd, malloc'd or (recently) free'd", there will be a pointer of 0 that should 
not be). Or, if the computer system has some parallel debugger, you can use that 
directly. For lldb, use -on_error_attach_debugger lldb
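
   For example, something along these lines (the rank count, executable name, and 
display value here are placeholders):

   mpiexec -n 32 ./your_app <your usual options> -on_error_attach_debugger gdb -display yourworkstation:0.0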

  If you have some compiler optimizations set when you ./configure PETSc you 
might try making another PETSC_ARCH without optimizations (this is PETSc's 
default when you do not use --with-debugging=0). Does it still crash with no 
optimizations?  Perhaps also try with a different compiler?
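
   A configure sketch for such a no-optimization build might look like the 
following (adjust compilers, MPI, and other options to match your current build; 
with --with-debugging=1 the -O0 -g flags are the default anyway and are only 
spelled out here to be explicit):

   ./configure PETSC_ARCH=arch-debug-noopt --with-debugging=1 --download-hypre \
 COPTFLAGS='-O0 -g' CXXOPTFLAGS='-O0 -g' FOPTFLAGS='-O0 -g'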

  Barry




> On Nov 9, 2023, at 6:57 AM, Pantelis Moschopoulos wrote:
> 
> Hello everyone,
> 
> I am trying to use PETSc coupled with Hypre BoomerAMG as a preconditioner in 
> our in-house code to simulate the transient motion of a complex fluid with 
> finite elements. The problem is that after a random number of iterations, an 
> error arises when Hypre is called.
> 
> The error that I get in the terminal is the following: 
> [16]PETSC ERROR: 
> 
> [16]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, 
> probably memory access out of range
> [16]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
> [16]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
> https://petsc.org/release/faq/
> [16]PETSC ERROR: -  Stack Frames 
> 
> [16]PETSC ERROR: The line numbers in the error traceback are not always exact.
> [16]PETSC ERROR: #1 Hypre solve
> [16]PETSC ERROR: #2 PCApply_HYPRE() at 
> /home/pmosx/Libraries/petsc/src/ksp/pc/impls/hypre/hypre.c:451
> [16]PETSC ERROR: #3 PCApply() at 
> /home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:486
> [16]PETSC ERROR: #4 PCApplyBAorAB() at 
> /home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:756
> [16]PETSC ERROR: #5 KSP_PCApplyBAorAB() at 
> /home/pmosx/Libraries/petsc/include/petsc/private/kspimpl.h:443
> [16]PETSC ERROR: #6 KSPGMRESCycle() at 
> /home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:146
> [16]PETSC ERROR: #7 KSPSolve_GMRES() at 
> /home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:227
> [16]PETSC ERROR: #8 KSPSolve_Private() at 
> /home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:910
> [16]PETSC ERROR: #9 KSPSolve() at 
> /home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:1082
> 
> At the same time, I use valgrind, and when the program stops it reports the 
> following:
> ==1261647== Invalid read of size 8
> ==1261647==at 0x4841C74: _intel_fast_memcpy (in 
> /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
> ==1261647==by 0x16231F73: hypre_GaussElimSolve (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
> ==1261647==by 0x1622DB4F: hypre_BoomerAMGCycle (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
> ==1261647==by 0x1620002E: hypre_BoomerAMGSolve (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
> ==1261647==by 0x12B6F8F8: PCApply_HYPRE (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x12C38785: PCApply (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x12C36A39: PCApplyBAorAB (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x126299E1: KSPGMRESCycle (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x12628051: KSPSolve_GMRES (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x127A532E: KSPSolve_Private (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x127A3C8A: KSPSolve (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==by 0x12C50AF1: kspsolve_ (in 
> /home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
> ==1261647==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
> ==1261647==
> 
> This is indeed a very peculiar error. I cannot understand why it happens. In 
> our solution procedure, we split the equations and solve them in a segregated 
> manner. I create two different KSPs (1_ksp and 2_ksp) using KSPSetOptionsPrefix. 
> Might this choice create confusion and result in this error?

Re: [petsc-users] Storing Values using a Triplet for using later

2023-11-09 Thread Brandon Denton via petsc-users
Good Morning,

Thank you Matt, Jed, and Barry. I will look into each of these suggestions and 
report back.

-Brandon

From: Matthew Knepley 
Sent: Wednesday, November 8, 2023 4:18 PM
To: Brandon Denton 
Cc: petsc-users@mcs.anl.gov 
Subject: Re: [petsc-users] Storing Values using a Triplet for using later

On Wed, Nov 8, 2023 at 2:40 PM Brandon Denton via petsc-users 
<petsc-users@mcs.anl.gov> wrote:
Good Afternoon,

Is there a structure within PETSc that allows storage of a value using a triplet, 
similar to PetscHMapIJSet, with the key being a struct{PetscScalar i, j, k;}?

I'm trying to access mesh information (the shape function coefficients I will 
calculate prior to their use) whose values I want to store in the auxiliary 
array available in the Residual Functions of PETSc's FEM infrastructure. After 
some trial and error, I've come to the realization that the coordinates 
(x[]) available in the auxiliary functions are the centroid of the cell/element 
currently being evaluated. This triplet is unique for each cell/element of a 
valid mesh, so I think it's reasonable to use it as a key for looking 
up stored values unique to each cell/element. My plan is to attach the map to 
the Application Context, also available to the Auxiliary Functions, to enable these 
calculations.

Does such a map infrastructure exist within PETSc? If so, could you point me to 
a reference for it? If not, does anyone have any suggestions on how to solve 
this problem?

As Jed says, this is a spatial hash. I have a primitive spatial hash now. You 
can use DMLocatePoints() to find the cell containing a point (like the 
centroid). Let me know if this does not work or if I misunderstand the problem.
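
A rough sketch of that lookup in C (the helper name and the assumption of a 3D, 
one-point-at-a-time query are mine, adapt as needed):

#include <petscdmplex.h>

/* Map one physical point (e.g. a cell centroid) back to the local cell containing it. */
static PetscErrorCode CellFromPoint(DM dm, const PetscReal x[3], PetscInt *cell)
{
  Vec                coords;
  PetscSF            cellSF = NULL;
  const PetscSFNode *found;
  PetscInt           nfound;
  PetscScalar       *c;

  PetscFunctionBeginUser;
  PetscCall(VecCreateSeq(PETSC_COMM_SELF, 3, &coords));
  PetscCall(VecSetBlockSize(coords, 3));       /* block size = spatial dimension */
  PetscCall(VecGetArray(coords, &c));
  c[0] = x[0]; c[1] = x[1]; c[2] = x[2];
  PetscCall(VecRestoreArray(coords, &c));
  PetscCall(DMLocatePoints(dm, coords, DM_POINTLOCATION_NONE, &cellSF));
  PetscCall(PetscSFGetGraph(cellSF, NULL, &nfound, NULL, &found));
  *cell = (nfound > 0) ? found[0].index : -1;  /* index is -1 if the point is not in a local cell */
  PetscCall(PetscSFDestroy(&cellSF));
  PetscCall(VecDestroy(&coords));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Doing this once per residual evaluation would be expensive, so you would still want 
to cache the result (e.g. keyed by cell number) in your Application Context.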

  Thanks!

Matt

Thank you in advance for your time.
Brandon Denton



--
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/


[petsc-users] PETSC breaks when using HYPRE preconditioner

2023-11-09 Thread Pantelis Moschopoulos
Hello everyone,

I am trying to use PETSc coupled with Hypre BoomerAMG as a preconditioner in our 
in-house code to simulate the transient motion of a complex fluid with finite 
elements. The problem is that after a random number of iterations, an error 
arises when Hypre is called.

The error that I get in the terminal is the following:
[16]PETSC ERROR: 

[16]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation, probably 
memory access out of range
[16]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[16]PETSC ERROR: or see https://petsc.org/release/faq/#valgrind and 
https://petsc.org/release/faq/
[16]PETSC ERROR: -  Stack Frames 

[16]PETSC ERROR: The line numbers in the error traceback are not always exact.
[16]PETSC ERROR: #1 Hypre solve
[16]PETSC ERROR: #2 PCApply_HYPRE() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/impls/hypre/hypre.c:451
[16]PETSC ERROR: #3 PCApply() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:486
[16]PETSC ERROR: #4 PCApplyBAorAB() at 
/home/pmosx/Libraries/petsc/src/ksp/pc/interface/precon.c:756
[16]PETSC ERROR: #5 KSP_PCApplyBAorAB() at 
/home/pmosx/Libraries/petsc/include/petsc/private/kspimpl.h:443
[16]PETSC ERROR: #6 KSPGMRESCycle() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:146
[16]PETSC ERROR: #7 KSPSolve_GMRES() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/impls/gmres/gmres.c:227
[16]PETSC ERROR: #8 KSPSolve_Private() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:910
[16]PETSC ERROR: #9 KSPSolve() at 
/home/pmosx/Libraries/petsc/src/ksp/ksp/interface/itfunc.c:1082

At the same time, I use valgrind, and when the program stops it reports the 
following:
==1261647== Invalid read of size 8
==1261647==at 0x4841C74: _intel_fast_memcpy (in 
/usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==1261647==by 0x16231F73: hypre_GaussElimSolve (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x1622DB4F: hypre_BoomerAMGCycle (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x1620002E: hypre_BoomerAMGSolve (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libHYPRE-2.29.0.so)
==1261647==by 0x12B6F8F8: PCApply_HYPRE (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12C38785: PCApply (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12C36A39: PCApplyBAorAB (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x126299E1: KSPGMRESCycle (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12628051: KSPSolve_GMRES (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x127A532E: KSPSolve_Private (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x127A3C8A: KSPSolve (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==by 0x12C50AF1: kspsolve_ (in 
/home/pmosx/Libraries/PETSC_INS_DIR_INTELDebug/lib/libpetsc.so.3.20.1)
==1261647==  Address 0x0 is not stack'd, malloc'd or (recently) free'd
==1261647==

This is indeed a very peculiar error. I cannot understand why it happens. In 
our solution procedure, we split the equations and solve them in a segregated 
manner. I create two different KSPs (1_ksp and 2_ksp) using KSPSetOptionsPrefix. 
Might this choice create confusion and result in this error?
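
For reference, the two solvers are then driven through their prefixes roughly like 
this (the prefix names below are placeholders, not our exact ones):

-sys1_ksp_type gmres -sys1_pc_type hypre -sys1_pc_hypre_type boomeramg
-sys2_ksp_type gmres -sys2_pc_type hypre -sys2_pc_hypre_type boomeramg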

Any help is much appreciated.

Pantelis


Re: [petsc-users] [BULK] Re: DMPlex and Gmsh

2023-11-09 Thread Sharan Roongta
Hello,
 
Thank you for the response. I shall try it out.
 
Regards,
Sharan
 
From: Matthew Knepley [mailto:knep...@gmail.com] 
Sent: Wednesday, 8 November 2023 23:09
To: Blaise Bourdin 
Cc: Sharan Roongta ; petsc-users@mcs.anl.gov
Subject: [BULK] Re: [petsc-users] DMPlex and Gmsh
Importance: Low
 
On Wed, Nov 8, 2023 at 4:50 PM Blaise Bourdin  wrote:
Hi, 
 
I think that you need to use the magical keyword “-dm_plex_gmsh_mark_vertices” 
for that
 
I try to describe the options here:
 
  https://petsc.org/main/manualpages/DMPlex/DMPlexCreateGmsh/
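 
With vertices marked, the Gmsh physical point tags should end up in an additional 
label (named "Vertex Sets", if I remember the reader correctly), which you can 
query for your boundary nodes. A sketch in C, under that assumption:
 
#include <petscdmplex.h>
 
/* Collect the vertices carrying a given Gmsh physical point tag.
   Assumes the mesh was read with -dm_plex_gmsh_mark_vertices. */
static PetscErrorCode GetTaggedVertices(DM dm, PetscInt tag, IS *vertices)
{
  DMLabel label;
 
  PetscFunctionBeginUser;
  PetscCall(DMGetLabel(dm, "Vertex Sets", &label));
  PetscCall(DMLabelGetStratumIS(label, tag, vertices));
  PetscFunctionReturn(0);
}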
 
   Thanks,
 
  Matt
 
Blaise


On Nov 8, 2023, at 1:13 PM, Sharan Roongta  wrote:
 
  
 
Dear Petsc team,
 
I want to load a .msh file generated using Gmsh software into the DMPlex 
object. There are several things I would want to clarify, but I would like to 
start with “Physical tags”.
 
If I have defined “Physical Points”, “Physical Surface”, and “Physical Volume” 
in my .geo file, I get the physical tags in the “.msh” file.
When I load this mesh into DMPlex and view the DM:
 
call DMView(globalMesh, PETSC_VIEWER_STDOUT_WORLD,err_PETSc)
  CHKERRQ(err_PETSc)
 
This is the output I get:
 
DM Object: n/a 1 MPI process
  type: plex
n/a in 3 dimensions:
  Number of 0-cells per rank: 14
  Number of 1-cells per rank: 49
  Number of 2-cells per rank: 60
  Number of 3-cells per rank: 24
Labels:
  celltype: 4 strata with value/size (0 (14), 6 (24), 3 (60), 1 (49))
  depth: 4 strata with value/size (0 (14), 1 (49), 2 (60), 3 (24))
  Cell Sets: 1 strata with value/size (8 (24))
  Face Sets: 6 strata with value/size (2 (4), 3 (4), 4 (4), 5 (4), 6 (4), 7 (4))

I was expecting to get the “Node Sets” or “Vertex Sets” also. Is my assumption 
wrong?

If yes, then how can one figure out the boundary nodes and their tags where I 
want to apply certain boundary conditions?
Currently we apply boundary conditions on faces, therefore “Face Sets” was 
enough. But now we want to apply displacements on certain boundary nodes.
 
I have also attached the .geo and .msh files (I hope you can open them).
The Petsc version I am using is 3.18.6.


Thanks and Regards,
Sharan Roongta
 
---
Max-Planck-Institut für Eisenforschung GmbH 
Max-Planck-Straße 1 
D-40237 Düsseldorf 

Handelsregister B 2533 
Amtsgericht Düsseldorf 
   
Geschäftsführung 
Prof. Dr. Gerhard Dehm 
Prof. Dr. Jörg Neugebauer 
Prof. Dr. Dierk Raabe 
Dr. Kai de Weldige 

Ust.-Id.-Nr.: DE 11 93 58 514 
Steuernummer: 105 5891 1000 
- 
Please consider that invitations and e-mails of our institute are 
only valid if they end with …@mpie.de. 
If you are not sure of the validity please contact r...@mpie.de

Bitte beachten Sie, dass Einladungen zu Veranstaltungen und E-Mails 
aus unserem Haus nur mit der Endung …@mpie.de gültig sind. 
In Zweifelsfällen wenden Sie sich bitte an r...@mpie.de
 
 



 
— 
Canada Research Chair in Mathematical and Computational Aspects of Solid 
Mechanics (Tier 1)
Professor, Department of Mathematics & Statistics
Hamilton Hall room 409A, McMaster University
1280 Main Street West, Hamilton, Ontario L8S 4K1, Canada 
https://www.math.mcmaster.ca/bourdin | +1 (905) 525 9140 ext. 27243
 

 
-- 
What most experimenters take for granted before they begin their experiments is 
infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener
 
https://www.cse.buffalo.edu/~knepley/
