Re: [deal.II] dealii installation issue on NERSC Cori

2020-10-24 Thread Aaditya Lakshmanan
Hi Timo, 
I tried another compilation and installation of deal.II without PETSc, 
p4est and LAPACK (by setting the corresponding 
DEAL_II_WITH_<PACKAGE>=OFF flags). After successfully completing the 
installation I tried the examples step-1 and step-2 with *cmake . ; make ; 
make run*, and both of them ran successfully. However, switching back to 
the build directory and running *make test* resulted in failure of all 
quick tests. It seems there might then be some issue with the LAPACK, 
p4est or PETSc installation I am using. I will try the installation 
process with different subsets of these packages and try to pinpoint the 
issue. What do you think?
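
For reference, the configure step was along the following lines (a sketch 
only; paths are placeholders and the exact option set may differ):

   cmake -DDEAL_II_WITH_PETSC=OFF \
         -DDEAL_II_WITH_P4EST=OFF \
         -DDEAL_II_WITH_LAPACK=OFF \
         -DCMAKE_INSTALL_PREFIX=/path/to/dealii_install \
         /path/to/dealii_source
   make -j8 install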

Best,
Aaditya 
   

On Saturday, October 24, 2020 at 5:43:50 PM UTC-4 Aaditya Lakshmanan wrote:

> Hi Timo,
> Thank you for your suggestion. I repeated the installation process and 
> this time I didn't run the quick_tests. Rather, I changed to the 
> examples/step-1 directory in the deal.II installation and executed the following:
>
> *cmake .  *
> *make *
>
> which exited with the following error:
>
> *Scanning dependencies of target step-1*
> *[ 50%] Building CXX object CMakeFiles/step-1.dir/step-1.cc.o*
> *[100%] Linking CXX executable step-1*
> */usr/bin/ld: /global/project/projectdirs/m2360/packagesCPFE/dealii_install/lib/libdeal_II.g.so: file not recognized: file truncated*
> *make[2]: *** [CMakeFiles/step-1.dir/build.make:95: step-1] Error 1*
> *make[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/step-1.dir/all] Error 2*
> *make: *** [Makefile:84: all] Error 2*
>
>
> I am not sure why this is the case, since the installation completed without 
> issues. I have attached the files *detailed.log* and *make_output.txt* (output 
> from *make -j8 install* on the KNL compute node).
>
> Best,
> Aaditya
>
>
> On Saturday, October 24, 2020 at 10:38:49 AM UTC-4 Timo Heister wrote:
>
>> Running the tests seems to have triggered a rebuild of deal.II. Can you 
>> try running some of the tutorial steps instead? 
>>
>> On Fri, Oct 23, 2020, 17:05 Aaditya Lakshmanan  wrote:
>>
>>> Hi Wolfgang,
>>>    I tried compiling, installing and running the deal.II tests 
>>> interactively on a KNL compute node. Both compilation and installation were 
>>> seemingly successful, but all the tests failed. I have attached the output 
>>> from make test in the file *test_output.txt*. An example error looks 
>>> like the following:
>>> 2/11 Test  #1: step.debug ...***Failed  265.99 sec
>>> gmake[6]: *** write jobserver: Invalid argument.  Stop.
>>> gmake[6]: *** Waiting for unfinished jobs
>>> gmake[6]: *** write jobserver: Invalid argument.  Stop.
>>> gmake[5]: *** [CMakeFiles/Makefile2:10829: 
>>> tests/quick_tests/CMakeFiles/step.debug.run.dir/rule] Error 2
>>> gmake[4]: *** [Makefile:3700: step.debug.run] Error 2
>>> Test step.debug: BUILD
>>> ===   OUTPUT BEGIN   ===
>>> step.debug: BUILD failed. Output:
>>> [  0%] Built target obj_sundials_inst
>>> [ 21%] Built target obj_physics_elasticity_inst
>>> [ 30%] Built target obj_dofs_inst
>>> [ 25%] Built target obj_matrix_free_inst
>>> [ 23%] Built target obj_amd_global_debug
>>> [ 34%] Built target obj_umfpack_DI_TRIPLET_MAP_NOX_debug
>>> [ 30%] Built target obj_differentiation_ad_inst
>>> [ 34%] Built target obj_multigrid_inst
>>> [ 59%] Built target obj_physics_inst
>>> [ 59%] Built target obj_non_matching_inst
>>> [ 59%] Built target obj_gmsh_inst
>>> [ 34%] Built target obj_distributed_inst
>>> [ 34%] Built target kill-step.debug-OK
>>> [ 34%] Built target obj_muparser_debug
>>> [ 34%] Built target obj_umfpack_DL_SOLVE_debug
>>> [ 59%] Built target obj_numerics_inst
>>> [ 59%] Built target obj_boost_system_debug
>>> [ 59%] Built target obj_umfpack_DL_STORE_debug
>>> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_NOX_debug
>>> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_X_debug
>>> [ 59%] Built target obj_umfpack_DI_TSOLVE_debug
>>> [ 78%] Built target obj_base_inst
>>> [ 78%] Built target obj_grid_inst
>>> [ 59%] Built target obj_umfpack_DI_SOLVE_debug
>>> [ 34%] Built target obj_amd_int_debug
>>> [ 59%] Built target obj_algorithms_inst
>>> [ 59%] Built target obj_meshworker_inst
>>> [ 59%] Built target obj_hp_inst
>>> [ 78%] Built target obj_particle_inst
>>> [ 59%] Built target obj_differentiation_sd_inst
>>> [ 59%] Built target obj_opencascade_inst
>>> [ 78%] Built target obj_lac_inst
>>> [ 59%] Built target obj_amd_long_debug
>>> [ 59%] Built target obj_umfpack_DL_TSOLVE_debug
>>> [ 59%] Built target obj_umfpack_DI_STORE_debug
>>> [ 59%] Built target obj_umfpack_DL_TRIPLET_NOMAP_X_debug
>>> [ 59%] Built target obj_umfpack_DI_TRIPLET_MAP_X_debug
>>> [ 59%] Built target obj_umfpack_DI_TRIPLET_NOMAP_X_debug
>>> [ 59%] Built target obj_umfpack_GENERIC_debug
>>> [ 78%] Built target obj_umfpack_DL_ASSEMBLE_debug
>>> [ 59%] Built target obj_umfpack_DI_ASSEMBLE_debug
>>> [ 84%] Built target obj_fe_inst
>>>
>>>

Re: [deal.II] dealii installation issue on NERSC Cori

2020-10-24 Thread Aaditya Lakshmanan
Hi Timo,
Thank you for your suggestion. I repeated the installation process and 
this time I didn't run the quick_tests. Rather, I changed to the 
examples/step-1 directory in the deal.II installation and executed the following:

*cmake .  *
*make *

which exited with the following error:

*Scanning dependencies of target step-1*
*[ 50%] Building CXX object CMakeFiles/step-1.dir/step-1.cc.o*
*[100%] Linking CXX executable step-1*
*/usr/bin/ld: /global/project/projectdirs/m2360/packagesCPFE/dealii_install/lib/libdeal_II.g.so: file not recognized: file truncated*
*make[2]: *** [CMakeFiles/step-1.dir/build.make:95: step-1] Error 1*
*make[1]: *** [CMakeFiles/Makefile2:137: CMakeFiles/step-1.dir/all] Error 2*
*make: *** [Makefile:84: all] Error 2*


I am not sure why this is the case, since the installation completed without 
issues. I have attached the files *detailed.log* and *make_output.txt* (output 
from *make -j8 install* on the KNL compute node).
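
One quick sanity check on the installed library (standard tools, just a 
suggestion) would be:

   ls -l /global/project/projectdirs/m2360/packagesCPFE/dealii_install/lib/libdeal_II.g.so
   file /global/project/projectdirs/m2360/packagesCPFE/dealii_install/lib/libdeal_II.g.so

A truncated shared object would typically show up with an implausibly small 
size, or *file* may no longer report it as an ELF shared object.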

Best,
Aaditya


On Saturday, October 24, 2020 at 10:38:49 AM UTC-4 Timo Heister wrote:

> Running the tests seems to have triggered a rebuild of deal.II. Can you 
> try running some of the tutorial steps instead? 
>
> On Fri, Oct 23, 2020, 17:05 Aaditya Lakshmanan  wrote:
>
>> Hi Wolfgang,
>>    I tried compiling, installing and running the deal.II tests 
>> interactively on a KNL compute node. Both compilation and installation were 
>> seemingly successful, but all the tests failed. I have attached the output 
>> from make test in the file *test_output.txt*. An example error looks 
>> like the following:
>> 2/11 Test  #1: step.debug ...***Failed  265.99 sec
>> gmake[6]: *** write jobserver: Invalid argument.  Stop.
>> gmake[6]: *** Waiting for unfinished jobs
>> gmake[6]: *** write jobserver: Invalid argument.  Stop.
>> gmake[5]: *** [CMakeFiles/Makefile2:10829: 
>> tests/quick_tests/CMakeFiles/step.debug.run.dir/rule] Error 2
>> gmake[4]: *** [Makefile:3700: step.debug.run] Error 2
>> Test step.debug: BUILD
>> ===   OUTPUT BEGIN   ===
>> step.debug: BUILD failed. Output:
>> [  0%] Built target obj_sundials_inst
>> [ 21%] Built target obj_physics_elasticity_inst
>> [ 30%] Built target obj_dofs_inst
>> [ 25%] Built target obj_matrix_free_inst
>> [ 23%] Built target obj_amd_global_debug
>> [ 34%] Built target obj_umfpack_DI_TRIPLET_MAP_NOX_debug
>> [ 30%] Built target obj_differentiation_ad_inst
>> [ 34%] Built target obj_multigrid_inst
>> [ 59%] Built target obj_physics_inst
>> [ 59%] Built target obj_non_matching_inst
>> [ 59%] Built target obj_gmsh_inst
>> [ 34%] Built target obj_distributed_inst
>> [ 34%] Built target kill-step.debug-OK
>> [ 34%] Built target obj_muparser_debug
>> [ 34%] Built target obj_umfpack_DL_SOLVE_debug
>> [ 59%] Built target obj_numerics_inst
>> [ 59%] Built target obj_boost_system_debug
>> [ 59%] Built target obj_umfpack_DL_STORE_debug
>> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_NOX_debug
>> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_X_debug
>> [ 59%] Built target obj_umfpack_DI_TSOLVE_debug
>> [ 78%] Built target obj_base_inst
>> [ 78%] Built target obj_grid_inst
>> [ 59%] Built target obj_umfpack_DI_SOLVE_debug
>> [ 34%] Built target obj_amd_int_debug
>> [ 59%] Built target obj_algorithms_inst
>> [ 59%] Built target obj_meshworker_inst
>> [ 59%] Built target obj_hp_inst
>> [ 78%] Built target obj_particle_inst
>> [ 59%] Built target obj_differentiation_sd_inst
>> [ 59%] Built target obj_opencascade_inst
>> [ 78%] Built target obj_lac_inst
>> [ 59%] Built target obj_amd_long_debug
>> [ 59%] Built target obj_umfpack_DL_TSOLVE_debug
>> [ 59%] Built target obj_umfpack_DI_STORE_debug
>> [ 59%] Built target obj_umfpack_DL_TRIPLET_NOMAP_X_debug
>> [ 59%] Built target obj_umfpack_DI_TRIPLET_MAP_X_debug
>> [ 59%] Built target obj_umfpack_DI_TRIPLET_NOMAP_X_debug
>> [ 59%] Built target obj_umfpack_GENERIC_debug
>> [ 78%] Built target obj_umfpack_DL_ASSEMBLE_debug
>> [ 59%] Built target obj_umfpack_DI_ASSEMBLE_debug
>> [ 84%] Built target obj_fe_inst
>>
>>
>> step.debug: **BUILD failed***
>>
>> ===   OUTPUT END   ===
>> Expected stage PASSED - aborting
>> CMake Error at /global/project/projectdirs/m2360/packagesCPFE/dealii/cmake/scripts/run_test.cmake:140 (MESSAGE):
>>   *** abort
>>
>>
>> Best,
>> Aaditya
>>
>> On Friday, October 23, 2020 at 2:00:00 AM UTC-4 Aaditya Lakshmanan wrote:
>>
>>> Hi Wolfgang,
>>>    Thanks for your suggestion. I just checked that the Cori login nodes 
>>> are Haswell processors, while the compute nodes on which I eventually wish 
>>> to run simulations are KNL processors (certain modules were also loaded 
>>> with that intention). I will use idev to access the compute nodes 
>>> interactively and try the build, installation and testing phases again. 
>>>
>>> Best,
>>> Aaditya
>>>
>>> On Thursday, October 22, 2020 at 11:29:38 PM UTC-4 Wolfgang 

Re: [deal.II] Re: Pressure-correction scheme: Behaviour of the H1 norm on the Taylor-Green vortex

2020-10-24 Thread Jose Lara
>
> I don't support this idea. If you take the Taylor-Green vortex, for
> example, the velocity decays with exp(-2 nu t), while the pressure
> decays with the square of that term. Why do you expect the error from your
> pressure-correction scheme and the error from the time discretization
> to converge in the same way? Note that they are not completely
> independent, of course.


The assumption was naive on my part. I had not considered that the
pressure-correction scheme could introduce such a strong spatial dependency
on the error saturation of the pressure field alone.

I am not convinced that this completely eliminates all influence of
> the pressure-correction scheme. I assume that it still gives an O(dt)
> additional error (or maybe something higher order depending on your
> scheme).
>

Yes, you are right. If those factors reduced or eliminated the influence
of the pressure-correction scheme, the Guermond problem would then show
a lower convergence order and/or faster saturation than the Taylor-Green
vortex, which is not the case.

On Sat., Oct. 24, 2020 at 17:19, Timo Heister wrote:

> > From my understanding, the lowest bound of the error on each norm is set
> either by the spatial or the temporal discretization. I was kind of
> expecting that the L2- and H1-Norms share a similar spatial and time
> dependence, i.e. that each field reaches its lowest bound simultaneously,
> and that they do so with a similar convergence rate evolution. Stated
> differently, they start with an order of convergence which remains constant
> for a given time step range. After reaching a small time step size, the
> convergence order tends to zero as the lowest bound of the error is reached.
>
> I don't support this idea. If you take the Taylor-Green vortex, for
> example, the velocity decays with exp(-2 nu t), while the pressure
> decays with the square of that term. Why do you expect the error from your
> pressure-correction scheme and the error from the time discretization
> to converge in the same way? Note that they are not completely
> independent, of course.
>
> > Furthermore, for the Taylor-Green vortex I am using periodic boundary
> conditions on all boundaries which rules out the corner singularities which
> plague the pressure-correction scheme. The solution in itself is smooth, so
> I had not thought the error of the boundary layer could have had such an
> effect on the H1-Norm.
>
> I am not convinced that this completely eliminates all influence of
> the pressure-correction scheme. I assume that it still gives an O(dt)
> additional error (or maybe something higher order depending on your
> scheme).
>
>
>
> --
> Timo Heister
> http://www.math.clemson.edu/~heister/
>



Re: [deal.II] Re: Pressure-correction scheme: Behaviour of the H1 norm on the Taylor-Green vortex

2020-10-24 Thread Timo Heister
> From my understanding, the lowest bound of the error on each norm is set 
> either by the spatial or the temporal discretization. I was kind of expecting 
> that the L2- and H1-Norms share a similar spatial and time dependence, i.e. 
> that each field reaches its lowest bound simultaneously, and that they do so 
> with a similar convergence rate evolution. Stated differently, they start 
> with an order of convergence which remains constant for a given time step 
> range. After reaching a small time step size, the convergence order tends to 
> zero as the lowest bound of the error is reached.

I don't support this idea. If you take the Taylor-Green vortex, for
example, the velocity decays with exp(-2 nu t), while the pressure
decays with the square of that term. Why do you expect the error from your
pressure-correction scheme and the error from the time discretization
to converge in the same way? Note that they are not completely
independent, of course.
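
For reference, one standard normalization of the 2D solution on a
2*pi-periodic box (with kinematic viscosity \nu; added here for clarity,
not quoted from the original mail) is

  u(x,y,t) =  \cos(x)\sin(y)\, e^{-2\nu t}
  v(x,y,t) = -\sin(x)\cos(y)\, e^{-2\nu t}
  p(x,y,t) = -\tfrac{1}{4}\left(\cos(2x)+\cos(2y)\right) e^{-4\nu t}

so the pressure amplitude is exactly the square of the velocity decay
factor.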

> Furthermore, for the Taylor-Green vortex I am using periodic boundary 
> conditions on all boundaries which rules out the corner singularities which 
> plague the pressure-correction scheme. The solution in itself is smooth, so I 
> had not thought the error of the boundary layer could have had such an effect 
> on the H1-Norm.

I am not convinced that this completely eliminates all influence of
the pressure-correction scheme. I assume that it still gives an O(dt)
additional error (or maybe something higher order depending on your
scheme).



-- 
Timo Heister
http://www.math.clemson.edu/~heister/



Re: [deal.II] dealii installation issue on NERSC Cori

2020-10-24 Thread Timo Heister
Running the tests seems to have triggered a rebuild of deal.II. Can you
try running some of the tutorial steps instead?
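
For example, built against the installed library (a sketch; the install
path is the one from your logs):

  cd examples/step-1
  cmake -DDEAL_II_DIR=/global/project/projectdirs/m2360/packagesCPFE/dealii_install .
  make
  make run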

On Fri, Oct 23, 2020, 17:05 Aaditya Lakshmanan  wrote:

> Hi Wolfgang,
>    I tried compiling, installing and running the deal.II tests
> interactively on a KNL compute node. Both compilation and installation were
> seemingly successful, but all the tests failed. I have attached the output
> from make test in the file *test_output.txt*. An example error looks like
> the following:
> 2/11 Test  #1: step.debug ...***Failed  265.99 sec
> gmake[6]: *** write jobserver: Invalid argument.  Stop.
> gmake[6]: *** Waiting for unfinished jobs
> gmake[6]: *** write jobserver: Invalid argument.  Stop.
> gmake[5]: *** [CMakeFiles/Makefile2:10829:
> tests/quick_tests/CMakeFiles/step.debug.run.dir/rule] Error 2
> gmake[4]: *** [Makefile:3700: step.debug.run] Error 2
> Test step.debug: BUILD
> ===   OUTPUT BEGIN   ===
> step.debug: BUILD failed. Output:
> [  0%] Built target obj_sundials_inst
> [ 21%] Built target obj_physics_elasticity_inst
> [ 30%] Built target obj_dofs_inst
> [ 25%] Built target obj_matrix_free_inst
> [ 23%] Built target obj_amd_global_debug
> [ 34%] Built target obj_umfpack_DI_TRIPLET_MAP_NOX_debug
> [ 30%] Built target obj_differentiation_ad_inst
> [ 34%] Built target obj_multigrid_inst
> [ 59%] Built target obj_physics_inst
> [ 59%] Built target obj_non_matching_inst
> [ 59%] Built target obj_gmsh_inst
> [ 34%] Built target obj_distributed_inst
> [ 34%] Built target kill-step.debug-OK
> [ 34%] Built target obj_muparser_debug
> [ 34%] Built target obj_umfpack_DL_SOLVE_debug
> [ 59%] Built target obj_numerics_inst
> [ 59%] Built target obj_boost_system_debug
> [ 59%] Built target obj_umfpack_DL_STORE_debug
> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_NOX_debug
> [ 59%] Built target obj_umfpack_DL_TRIPLET_MAP_X_debug
> [ 59%] Built target obj_umfpack_DI_TSOLVE_debug
> [ 78%] Built target obj_base_inst
> [ 78%] Built target obj_grid_inst
> [ 59%] Built target obj_umfpack_DI_SOLVE_debug
> [ 34%] Built target obj_amd_int_debug
> [ 59%] Built target obj_algorithms_inst
> [ 59%] Built target obj_meshworker_inst
> [ 59%] Built target obj_hp_inst
> [ 78%] Built target obj_particle_inst
> [ 59%] Built target obj_differentiation_sd_inst
> [ 59%] Built target obj_opencascade_inst
> [ 78%] Built target obj_lac_inst
> [ 59%] Built target obj_amd_long_debug
> [ 59%] Built target obj_umfpack_DL_TSOLVE_debug
> [ 59%] Built target obj_umfpack_DI_STORE_debug
> [ 59%] Built target obj_umfpack_DL_TRIPLET_NOMAP_X_debug
> [ 59%] Built target obj_umfpack_DI_TRIPLET_MAP_X_debug
> [ 59%] Built target obj_umfpack_DI_TRIPLET_NOMAP_X_debug
> [ 59%] Built target obj_umfpack_GENERIC_debug
> [ 78%] Built target obj_umfpack_DL_ASSEMBLE_debug
> [ 59%] Built target obj_umfpack_DI_ASSEMBLE_debug
> [ 84%] Built target obj_fe_inst
>
>
> step.debug: **BUILD failed***
>
> ===   OUTPUT END   ===
> Expected stage PASSED - aborting
> CMake Error at /global/project/projectdirs/m2360/packagesCPFE/dealii/cmake/scripts/run_test.cmake:140 (MESSAGE):
>   *** abort
>
>
> Best,
> Aaditya
>
> On Friday, October 23, 2020 at 2:00:00 AM UTC-4 Aaditya Lakshmanan wrote:
>
>> Hi Wolfgang,
>>    Thanks for your suggestion. I just checked that the Cori login nodes
>> are Haswell processors, while the compute nodes on which I eventually wish
>> to run simulations are KNL processors (certain modules were also loaded
>> with that intention). I will use idev to access the compute nodes
>> interactively and try the build, installation and testing phases again.
>>
>> Best,
>> Aaditya
>>
>> On Thursday, October 22, 2020 at 11:29:38 PM UTC-4 Wolfgang Bangerth
>> wrote:
>>
>>>
>>> Aaditya,
>>> is NERSC Cori a machine where the front-end node runs a different
>>> processor/system than the compute nodes? If so, you're building a
>>> library for
>>> the compute nodes, and the tests are also built for the compute nodes.
>>> But
>>> you're trying to run the test executables on the front-end node -- which
>>> might
>>> help explain the error you see.
>>>
>>> Best
>>> W.
>>>
>>>
>>> On 10/22/20 7:52 PM, Aaditya Lakshmanan wrote:
>>> > Hi Everyone,
>>> >    I have been trying to install deal.II on the NERSC Cori system and,
>>> > after a seemingly successful cmake and make install, make test yields
>>> > failure for all the tests set up. I briefly detail the procedure I
>>> > followed and the outputs and errors obtained.
>>> >
>>> > As a pre-requisite I installed p4est using the setup script (with some
>>> > modifications) and PETSc manually with the configure.py script and
>>> > appropriate settings. The following modules were loaded:
>>> >
>>> > Currently Loaded Modulefiles:
>>> >   1) modules/3.2.11.4  5) craype-network-aries
>>> 9)
>>> > PrgEnv-intel/6.0.5   13) 

Re: [deal.II] Re: Pressure-correction scheme: Behaviour of the H1 norm on the Taylor-Green vortex

2020-10-24 Thread Jose Lara
Hello Wolfgang, Martin, Bruno, Richard and Timo,


> I'm not entirely clear about what your question is. Are you seeing
> convergence
> rates that are too low or too large? It is not uncommon to have cases
> where a
> scheme converges too fast (the convergence rate is too large); this is
> typically the case because the solution has a symmetry.
>
> Best
>   W.


Apologies for not explaining myself more clearly. I will try again:
From my understanding, the lowest bound of the error on each norm is set
either by the spatial or the temporal discretization. I was kind of
expecting that the L2- and H1-Norms share a similar spatial and time
dependence, i.e. that each field reaches its lowest bound simultaneously,
and that they do so with a similar convergence rate evolution. Stated
differently, they start with an order of convergence which remains constant
for a given time step range. After reaching a small time step size, the
convergence order tends to zero as the lowest bound of the error is reached.
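
Schematically (my shorthand for this expectation, with spatial order p and
temporal order q):

  e(h, \Delta t) \approx C_x h^p + C_t \Delta t^q

so that, at fixed h, the observed temporal order drops towards zero once
C_t \Delta t^q falls below the spatial term C_x h^p.
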
From the tests' results I can see that the H1-Norm of the pressure has a
considerably stronger spatial dependence than the velocity, as it reaches
its lowest bound while the velocity still has a constant convergence order.
This behaviour is also seen in the L2- and Linfty-Norms, but on a much
milder scale, as seen in the spatial convergence test. My question is
whether this stronger spatial dependence of the pressure is problem-dependent
or intrinsic to the pressure-correction scheme.

While I have not experimented in detail with the step-35 program, we have
> done extensive studies on similar problems in
> https://doi.org/10.1016/j.jcp.2017.09.031 (or
> https://arxiv.org/abs/1706.09252 for an earlier preprint version of the
> same manuscript) including the pressure correction scheme. While the
> spatial discretization is DG, where some of the issues are different, the experience
> from our experiments suggests that the pressure correction scheme should
> not behave too differently from other time discretization schemes with
> similar ingredients (say BDF-2 on the fully coupled scheme). In other
> words, I would expect second order convergence in the pressure for
> Taylor-Hood elements. I should note that there are some subtle issues with
> boundary conditions in projection schemes, so I cannot exclude some hidden
> problems with the step-35 implementation or the way you set up the
> experiments.
>
> Best,
> Martin
>
Thanks for the manuscript; there I notice that the results shown are those
of the behaviour of the L2-Norm. My finite element implementation behaves
similarly in the L2-Norm to the convergence rates in your paper (BDF2 leads
to 2nd order convergence in both velocity and, for the most part,
pressure). Did you also analyse the H1-Norm by any chance? That is where I
see the stronger spatial dependence of the pressure.
On another note, it caught my eye that you split the Neumann boundary
conditions. I have not done tests with them yet, but what is the benefit of
doing this, or why is it necessary? Furthermore, my next step would be the
DFG 2D-2 benchmark. There, you computed the traction force using the
symmetric gradient instead of the normal gradient. While your formulation
would for me be the correct one, as the stress tensor is so defined, the
benchmark papers use the normal gradient. This has caused me some confusion
as to which formulation I should implement for the benchmark.
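
For reference, the two formulations in question are (standard notation,
added here for clarity; \mu denotes the dynamic viscosity):

  full stress:     t = \left(-p I + 2\mu\,\varepsilon(u)\right) n,
                   with \varepsilon(u) = \tfrac{1}{2}\left(\nabla u + (\nabla u)^T\right)
  normal gradient: t = -p\, n + \mu\, (\nabla u)\, n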

> Hello Jose,
> I wish I could help, but I second Wolfgang's question.
> Is your code available somewhere? I would be glad to take a look at it and
> compare the solutions for the same problems using different formulations. I
> would expect that if you fix the issue with boundary conditions (those
> described in the Guermond paper, that is the  "pressure boundary layer")
> then you would recover exactly what you should get with traditional schemes
> using Taylor-Hood element (as Martin discussed).
>

I tried reformulating the question above. Hopefully it is clearer now. I
will clean the code up and get back to you. The pressure-correction scheme
I am using is the incremental rotational one, which has the smallest error
caused by the boundary layer. Could the boundary layer in this case still
have such a strong influence?
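
For context, the pressure update of the rotational incremental scheme reads,
in the notation of Guermond, Minev & Shen (2006) and up to the normalization
of the viscosity,

  p^{n+1} = p^n + \phi^{n+1} - \nu\, \nabla\cdot\tilde{u}^{n+1}

where \phi^{n+1} is the pressure increment from the projection step; the
divergence correction is what suppresses the artificial pressure boundary
layer.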

> If the velocity error you have is low enough, then for the somewhat
> time-independent PPE you solve given that velocity,
> you might get high enough rates up to a certain point -> and that point
> might lie below the error you see on those 3 levels.
> So, try to go for smaller timesteps (keeping the Re the same) and use more
> spatial refinement levels.
> In general I would also recommend varying the Reynolds number a bit and
> seeing what happens - maybe it is an effect that is limited to low
> Re?
>
> Anyways, having a bigger convergence rate than expected is a nice problem
> to have, isn't it? ; )
> I would not think it is caused by a bug, given that the other rates look
> just as expected!
>
>