Sorry for the lack of explanation.

I built GetFEM on ubuntu:20.04 using the configuration command
"./configure --with-pic --enable-paralevel=2"

I am using the following packages:
- automake
- libtool
- make
- g++
- libqd-dev
- libqhull-dev
- libmumps-dev
- liblapack-dev
- libopenblas-dev
- libpython3-dev
- gfortran
- libmetis-dev

I attach a Dockerfile and a Python file to reproduce the issue.
You can reproduce it with the following command:

$ sudo docker build -t demo_parallel_laplacian_nonlinear_term.py .
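
In case the attachment is stripped, a Dockerfile along these lines
reproduces the same setup (a sketch, not the exact attached file; the
git/python3/MPI package names in the apt line are my additions):

FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
    automake autoconf libtool make g++ gfortran \
    libqd-dev libqhull-dev libmumps-dev liblapack-dev libopenblas-dev \
    libpython3-dev libmetis-dev \
    git python3 python3-numpy python3-mpi4py mpi-default-dev
RUN git clone https://git.savannah.nongnu.org/git/getfem.git /getfem \
 && cd /getfem \
 && bash autogen.sh \
 && ./configure --with-pic --enable-paralevel=2 \
 && make \
 && make install
COPY demo_parallel_laplacian.py /
CMD ["mpirun", "-n", "2", "--allow-run-as-root", "python3", "/demo_parallel_laplacian.py"]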

Best Regards
Tetsuo

On Wed, May 19, 2021 at 11:12 PM Konstantinos Poulios <[email protected]> wrote:

> I think the instructions page is correct. What distribution do you build
> getfem on and what is your configuration command?
> Best regards
> Kostas
>
> On Wed, May 19, 2021 at 10:44 AM Tetsuo Koyama <[email protected]>
> wrote:
>
>> Dear Kostas
>>
>> This page (http://getfem.org/tutorial/install.html) says
>> - Parallel MUMPS, METIS and MPI4PY packages if you want to use the MPI
>> parallelized version of GetFEM.
>>
>> Is there a recommended way to install parallel MUMPS, METIS and
>> MPI4PY?
>> I could not find this information on the page.
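>>
>> For example, is something like the following the recommended route on
>> Debian/Ubuntu (just my guess at the package names)?
>>
>> $ sudo apt install libmumps-dev libmetis-dev python3-mpi4py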
>>
>> If you can give me any information, I will add it to the following page:
>> http://getfem.org/install/install_linux.html
>>
>> BR
>> Tetsuo
>>
>> On Wed, May 19, 2021 at 10:45 AM Tetsuo Koyama <[email protected]> wrote:
>>
>>> Dear Kostas
>>>
>>> No I haven't. I am using libmumps-seq-dev from the Ubuntu repository.
>>> I will try again with the parallel version of MUMPS.
>>>
>>> BR
>>> Tetsuo
>>>
>>> On Wed, May 19, 2021 at 4:50 AM Konstantinos Poulios <[email protected]> wrote:
>>>
>>>> Dear Tetsuo,
>>>>
>>>> Have you compiled GetFEM with the parallel version of MUMPS? In
>>>> Ubuntu/Debian you must link to dmumps instead of dmumps_seq, for example.
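>>>>
>>>> You can check the linkage with ldd, for example (path illustrative,
>>>> adjust to where your libgetfem is installed):
>>>>
>>>> $ ldd /usr/local/lib/libgetfem.so | grep mumps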
>>>>
>>>> BR
>>>> Kostas
>>>>
>>>> On Tue, May 18, 2021 at 2:09 PM Tetsuo Koyama <[email protected]>
>>>> wrote:
>>>>
>>>>> Dear Kostas
>>>>>
>>>>> Thank you for your report.
>>>>> I am happy that it runs well on your system.
>>>>> I will put together a procedure to reproduce this error. Please
>>>>> wait.
>>>>>
>>>>> Best Regards Tetsuo
>>>>>
>>>>> On Tue, May 18, 2021 at 6:10 PM Konstantinos Poulios <[email protected]> wrote:
>>>>>
>>>>>> Dear Tetsuo,
>>>>>> I could not confirm this issue. On my system the example runs well
>>>>>> on both 1 and 2 processes (it doesn't scale well, though).
>>>>>> BR
>>>>>> Kostas
>>>>>>
>>>>>> On Sun, May 16, 2021 at 10:07 AM Tetsuo Koyama <[email protected]>
>>>>>> wrote:
>>>>>>
>>>>>>> Dear Kostas
>>>>>>>
>>>>>>> I am looking inside the source code.
>>>>>>> > if (generic_expressions.size()) {...}
>>>>>>> Sorry, it looks complex to me.
>>>>>>>
>>>>>>> FYI, I found that the runs with 1 and 2 MPI processes differ at the
>>>>>>> following line:
>>>>>>> >    if (iter.finished(crit)) {
>>>>>>> This is in the "Newton_with_step_control" function in
>>>>>>> getfem_model_solvers.h.
>>>>>>>
>>>>>>> "crit" is calculated by rit = res / approx_eln and res and
>>>>>>> approx_eln is ...
>>>>>>>
>>>>>>> $ mpirun -n 1 python demo_parallel_laplacian.py
>>>>>>> res=1.31449e-11
>>>>>>> approx_eln=6.10757
>>>>>>> crit=2.15222e-12
>>>>>>>
>>>>>>> $ mpirun -n 2 python demo_parallel_laplacian.py
>>>>>>> res=6.02926
>>>>>>> approx_eln=12.2151
>>>>>>> crit=0.493588
>>>>>>>
>>>>>>> res=0.135744
>>>>>>> approx_eln=12.2151
>>>>>>> crit=0.0111128
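>>>>>>>
>>>>>>> (Each printed crit is consistent with crit = res / approx_eln,
>>>>>>> e.g. 6.02926 / 12.2151 ≈ 0.493588; note that both res and
>>>>>>> approx_eln differ between the 1- and 2-process runs.)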
>>>>>>>
>>>>>>> I am now trying to understand what the correct residual value of
>>>>>>> the Newton(-Raphson) algorithm should be.
>>>>>>> I would be glad to hear your opinion.
>>>>>>>
>>>>>>> Best Regards Tetsuo
>>>>>>> On Tue, May 11, 2021 at 7:28 PM Tetsuo Koyama <[email protected]> wrote:
>>>>>>>
>>>>>>>> Dear Kostas
>>>>>>>>
>>>>>>>> > The relevant code is in the model::assembly function in
>>>>>>>> > getfem_models.cc. The code assembling the term you add with
>>>>>>>> > md.add_nonlinear_term(..) must be executed inside the if condition
>>>>>>>> >
>>>>>>>> > if (generic_expressions.size()) {...}
>>>>>>>> > You can have a look there and ask for further help if it looks too
>>>>>>>> > complex. You should also check if the test works when you run it
>>>>>>>> > with md.add_nonlinear_term but setting the number of MPI processes
>>>>>>>> > to one.
>>>>>>>>
>>>>>>>> Thanks, I will check it. Also, the following command completed
>>>>>>>> successfully:
>>>>>>>>
>>>>>>>> $ mpirun -n 1 python demo_parallel_laplacian.py
>>>>>>>>
>>>>>>>> So all we have to do is compare -n 1 with -n 2.
>>>>>>>>
>>>>>>>> Best regards Tetsuo
>>>>>>>>
>>>>>>>> On Tue, May 11, 2021 at 6:44 PM Konstantinos Poulios <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Dear Tetsuo,
>>>>>>>>>
>>>>>>>>> The relevant code is in the model::assembly function in
>>>>>>>>> getfem_models.cc. The code assembling the term you add with
>>>>>>>>> md.add_nonlinear_term(..) must be executed inside the if condition
>>>>>>>>>
>>>>>>>>> if (generic_expressions.size()) {...}
>>>>>>>>>
>>>>>>>>> You can have a look there and ask for further help if it looks too
>>>>>>>>> complex. You should also check if the test works when you run it with
>>>>>>>>> md.add_nonlinear_term but setting the number of MPI processes to one.
>>>>>>>>>
>>>>>>>>> BR
>>>>>>>>> Kostas
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, May 11, 2021 at 10:44 AM Tetsuo Koyama <
>>>>>>>>> [email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> Dear Kostas
>>>>>>>>>>
>>>>>>>>>> Thank you for your reply.
>>>>>>>>>>
>>>>>>>>>> > Interesting. In order to isolate the issue, can you also check
>>>>>>>>>> > with md.add_linear_term(..)?
>>>>>>>>>> It terminates correctly when using md.add_linear_term(..).
>>>>>>>>>> So it seems the problem is with md.add_nonlinear_term(..).
>>>>>>>>>> Is there anything I can check?
>>>>>>>>>>
>>>>>>>>>> Best regards Tetsuo.
>>>>>>>>>>
>>>>>>>>>> On Tue, May 11, 2021 at 5:19 PM Konstantinos Poulios <[email protected]> wrote:
>>>>>>>>>>
>>>>>>>>>>> Dear Tetsuo,
>>>>>>>>>>>
>>>>>>>>>>> Interesting. In order to isolate the issue, can you also check
>>>>>>>>>>> with md.add_linear_term(..)?
>>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Kostas
>>>>>>>>>>>
>>>>>>>>>>> On Tue, May 11, 2021 at 12:22 AM Tetsuo Koyama <
>>>>>>>>>>> [email protected]> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> Dear GetFEM community
>>>>>>>>>>>>
>>>>>>>>>>>> I am running the MPI parallelization of GetFEM. The commands
>>>>>>>>>>>> I run are:
>>>>>>>>>>>>
>>>>>>>>>>>> $ git clone https://git.savannah.nongnu.org/git/getfem.git
>>>>>>>>>>>> $ cd getfem
>>>>>>>>>>>> $ bash autogen.sh
>>>>>>>>>>>> $ ./configure --with-pic --enable-paralevel=2
>>>>>>>>>>>> $ make
>>>>>>>>>>>> $ make install
>>>>>>>>>>>> $ mpirun -n 2 python demo_parallel_laplacian.py
>>>>>>>>>>>>
>>>>>>>>>>>> The python script ends correctly. But when I changed the
>>>>>>>>>>>> following linear term to a nonlinear term, the script did not
>>>>>>>>>>>> terminate.
>>>>>>>>>>>>
>>>>>>>>>>>> -md.add_Laplacian_brick(mim, 'u')
>>>>>>>>>>>> +md.add_nonlinear_term(mim, "Grad_u.Grad_Test_u")
>>>>>>>>>>>>
>>>>>>>>>>>> Do you know the reason?
>>>>>>>>>>>> Best regards Tetsuo
>>>>>>>>>>>>
>>>>>>>>>>>

Attachment: Dockerfile
Description: Binary data

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Python GetFEM interface
#
# Copyright (C) 2004-2020 Yves Renard, Julien Pommier.
#
# This file is a part of GetFEM
#
# GetFEM  is  free software;  you  can  redistribute  it  and/or modify it
# under  the  terms  of the  GNU  Lesser General Public License as published
# by  the  Free Software Foundation;  either version 2.1 of the License,  or
# (at your option) any later version.
# This program  is  distributed  in  the  hope  that it will be useful,  but
# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
# or  FITNESS  FOR  A PARTICULAR PURPOSE.  See the GNU Lesser General Public
# License for more details.
# You  should  have received a copy of the GNU Lesser General Public License
# along  with  this program;  if not, write to the Free Software Foundation,
# Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301, USA.
#
############################################################################
"""  2D Poisson problem test.

  This program is used to check that python-getfem is working in parallel.
  This is also a good example of use of GetFEM.

  Run this script by invoking
  mpiexec -n 4 python demo_parallel_laplacian.py

  $Id: demo_parallel_laplacian.py 3809 2011-09-26 20:38:56Z logari81 $
"""
# import basic modules
import time

import numpy as np

import getfem as gf
import mpi4py.MPI as mpi

rank = mpi.COMM_WORLD.rank
if (rank == 0):
  print('Running Parallel Getfem with python interface')

print('Hello from thread ', rank)

## Parameters
NX = 100                           # Mesh parameter.
Dirichlet_with_multipliers = True  # Dirichlet condition with multipliers
                                   # or penalization
dirichlet_coefficient = 1e10       # Penalization coefficient

t = time.process_time()

# creation of a simple cartesian mesh
m = gf.Mesh('regular_simplices', np.arange(0,1+1./NX,1./NX), np.arange(0,1+1./NX,1./NX))
if (rank == 0):
  print('Time for building mesh', time.process_time()-t)
t = time.process_time()

# create a MeshFem for u and rhs fields of dimension 1 (i.e. a scalar field)
mfu   = gf.MeshFem(m, 1)
mfrhs = gf.MeshFem(m, 1)
# assign the P2 fem to all convexes of both MeshFems
mfu.set_fem(gf.Fem('FEM_PK(2,2)'))
mfrhs.set_fem(gf.Fem('FEM_PK(2,2)'))

# a degree 4 quadrature rule on triangles will be used
mim = gf.MeshIm(m, gf.Integ('IM_TRIANGLE(4)'))

# boundary selection
flst  = m.outer_faces()
fnor  = m.normal_of_faces(flst)
tleft = abs(fnor[1,:]+1) < 1e-14
ttop  = abs(fnor[0,:]-1) < 1e-14
fleft = np.compress(tleft, flst, axis=1)
ftop  = np.compress(ttop, flst, axis=1)
fneum = np.compress(np.logical_not(ttop + tleft), flst, axis=1)

# mark it as boundary
DIRICHLET_BOUNDARY_NUM1 = 1
DIRICHLET_BOUNDARY_NUM2 = 2
NEUMANN_BOUNDARY_NUM = 3
m.set_region(DIRICHLET_BOUNDARY_NUM1, fleft)
m.set_region(DIRICHLET_BOUNDARY_NUM2, ftop)
m.set_region(NEUMANN_BOUNDARY_NUM, fneum)

if (rank == 0):
  print('Time for building fem and im', time.process_time()-t)
t = time.process_time()

nb_dof = mfu.nbdof()
if (rank == 0):
  print('Nb dof for the main unknown: ', nb_dof)

if (rank == 0):
  print('Time for dof numbering', time.process_time()-t)
t = time.process_time()
  

# interpolate the exact solution (Assuming mfu is a Lagrange fem)
Ue = mfu.eval('y*(y-1)*x*(x-1)+x*x*x*x*x')

# interpolate the source term
F1 = mfrhs.eval('-(2*(x*x+y*y)-2*x-2*y+20*x*x*x)')
F2 = mfrhs.eval('[y*(y-1)*(2*x-1) + 5*x*x*x*x, x*(x-1)*(2*y-1)]')

if (rank == 0):
  print('Time for python interpolation', time.process_time()-t)
t = time.process_time()

# model
md = gf.Model('real')

# main unknown
md.add_fem_variable('u', mfu)

# laplacian term on u
md.add_nonlinear_term(mim, "Grad_u.Grad_Test_u")
# md.add_linear_term(mim, "Grad_u.Grad_Test_u")
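# NOTE: replacing md.add_Laplacian_brick(mim, 'u') with the nonlinear term
# above is the change that reproduces the hang with mpirun -n 2 discussed
# in the thread; with mpirun -n 1 the script completes.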

# volumic source term
md.add_initialized_fem_data('VolumicData', mfrhs, F1)
md.add_source_term_brick(mim, 'u', 'VolumicData')

# Neumann condition.
md.add_initialized_fem_data('NeumannData', mfrhs, F2)
md.add_normal_source_term_brick(mim, 'u', 'NeumannData',
                                NEUMANN_BOUNDARY_NUM)

# Dirichlet condition on the left.
md.add_initialized_fem_data("DirichletData", mfu, Ue)

if (Dirichlet_with_multipliers):
  md.add_Dirichlet_condition_with_multipliers(mim, 'u', mfu,
                                              DIRICHLET_BOUNDARY_NUM1,
                                              'DirichletData')
else:
  md.add_Dirichlet_condition_with_penalization(mim, 'u', dirichlet_coefficient,
                                               DIRICHLET_BOUNDARY_NUM1,
                                               'DirichletData')

# Dirichlet condition on the top.
# Two Dirichlet brick in order to test the multiplier
# selection in the intersection.
if (Dirichlet_with_multipliers):
  md.add_Dirichlet_condition_with_multipliers(mim, 'u', mfu,
                                              DIRICHLET_BOUNDARY_NUM2,
                                              'DirichletData')
else:
  md.add_Dirichlet_condition_with_penalization(mim, 'u', dirichlet_coefficient,
                                               DIRICHLET_BOUNDARY_NUM2,
                                               'DirichletData')

if (rank == 0):
  print('Time for model building', time.process_time()-t)
t = time.process_time()

nb_dof = md.nbdof()
if (rank == 0):
  print('Nb dof for the model: ', nb_dof)

if (rank == 0):
  print('Time for model actualize sizes', time.process_time()-t)
t = time.process_time()

# assembly of the linear system and solve.
md.solve()

if (rank == 0):
  print('Time for model solve', time.process_time()-t)
t = time.process_time()

# main unknown
U = md.variable('u')
L2error = gf.compute(mfu, U-Ue, 'L2 norm', mim)
H1error = gf.compute(mfu, U-Ue, 'H1 norm', mim)

if (rank == 0):
  print('Error in L2 norm : ', L2error)
  print('Error in H1 norm : ', H1error)

if (rank == 0):
  print('Time for error computation', time.process_time()-t)
t = time.process_time()


# export data
# if (rank == 0):
#   mfu.export_to_pos('laplacian.pos', Ue,'Exact solution',
#                     U,'Computed solution')
#   print('You can view the solution with (for example):')
#   print('gmsh laplacian.pos')
