Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Sajid Ali via petsc-users
 Hi Junchao/Barry,

It doesn't really matter what the h5 file contains, so I'm attaching a
lightly edited version of src/vec/vec/examples/tutorials/ex10.c which should
produce a vector to be used as input for the above test case. (I'm working
with `--with-scalar-type=complex`.)

Now that I think of it, fixing this bug is not important; I can work around
the issue by creating a new vector with VecCreateMPI and accept the small
loss in performance of VecPointwiseMult due to misaligned layouts. If it's
a small fix it may be worth the time, but fixing this is not a big priority
right now. If it's a complicated fix, this issue can serve as a note to
future users.
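
For readers of the archive, here is a minimal sketch of the workaround
described above: create an independent vector with VecCreateMPI() and the
default PETSc layout instead of calling VecDuplicate() on a vector obtained
from MatCreateVecsFFTW(). The global size and variable names are illustrative
assumptions, not taken from the attached code.

  /* Workaround sketch: an independent vector with the standard PETSc layout.
     Its layout may differ from the FFTW-aligned vectors, which is the small
     VecPointwiseMult() penalty mentioned above. */
  #include <petscvec.h>

  int main(int argc, char **argv)
  {
    Vec            u;
    PetscInt       N = 128;                  /* global size: illustrative only */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, N, &u); CHKERRQ(ierr);
    /* ... VecLoad() into u, VecPointwiseMult() against the FFTW vectors ... */
    ierr = VecDestroy(&u); CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }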


Thank You,
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


ex10.c
Description: Binary data


Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Smith, Barry F. via petsc-users



> On Nov 1, 2019, at 4:50 PM, Zhang, Junchao via petsc-users wrote:
> 
> I know nothing about Vec FFTW,

  You are lucky :-)

> but if you can provide hdf5 files in your test, I will see if I can reproduce 
> it.
> --Junchao Zhang
> 
> 
> On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-users wrote:
> Hi PETSc-developers, 
> 
> I'm unable to debug a crash in VecDestroy that seems to depend only on 
> whether or not a VecLoad was performed on a vector obtained by duplicating 
> one generated by MatCreateVecsFFTW. 
> 
> I'm attaching two examples, ex1.c and ex2.c. The first one just creates 
> vectors aligned as per the FFTW layout, duplicates one of them, and destroys 
> them all at the end. A bug related to this was fixed sometime between the 
> 3.11 and 3.12 releases. I've tested this code with versions 3.11.1 and 
> 3.12.1 and, as expected, it runs with no issues with 3.12.1 and fails with 3.11.1.
> 
> Now, the second one just adds a few lines which load a vector from file into 
> the duplicated vector before destroying them all. For some reason, this code 
> fails with both 3.11.1 and 3.12.1. I'm lost as to what may cause this error 
> and would appreciate any help in how to debug it. Thanks in advance for the 
> help! 
> 
> PS: I've attached the two codes (ex1.c/ex2.c), the log files for both make and 
> run, and finally a bash script that was run to compile, log, and control the 
> version of PETSc used. 
> 
> 
> --
> Sajid Ali
> Applied Physics
> Northwestern University
> s-sajid-ali.github.io



Re: [petsc-users] VecDuplicate for FFTW-Vec causes VecDestroy to fail conditionally on VecLoad

2019-11-01 Thread Zhang, Junchao via petsc-users
I know nothing about Vec FFTW, but if you can provide hdf5 files in your test, 
I will see if I can reproduce it.
--Junchao Zhang


On Fri, Nov 1, 2019 at 2:08 PM Sajid Ali via petsc-users wrote:
Hi PETSc-developers,

I'm unable to debug a crash in VecDestroy that seems to depend only on 
whether or not a VecLoad was performed on a vector obtained by duplicating 
one generated by MatCreateVecsFFTW.

I'm attaching two examples, ex1.c and ex2.c. The first one just creates vectors 
aligned as per the FFTW layout, duplicates one of them, and destroys them all at 
the end. A bug related to this was fixed sometime between the 3.11 and 3.12 
releases. I've tested this code with versions 3.11.1 and 3.12.1 and, as 
expected, it runs with no issues with 3.12.1 and fails with 3.11.1.

Now, the second one just adds a few lines which load a vector from file into 
the duplicated vector before destroying them all. For some reason, this code 
fails with both 3.11.1 and 3.12.1. I'm lost as to what may cause this error 
and would appreciate any help in how to debug it. Thanks in advance for the 
help!

PS: I've attached the two codes (ex1.c/ex2.c), the log files for both make and 
run, and finally a bash script that was run to compile, log, and control the 
version of PETSc used.
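
For readers of the archive who do not have the attachments, a rough
reconstruction of the sequence described for ex2.c follows. The 1-D FFT size,
the HDF5 file name, and the object name are assumptions for illustration;
this is not the attached code.

  #include <petscmat.h>
  #include <petscviewerhdf5.h>

  int main(int argc, char **argv)
  {
    Mat            A;
    Vec            x, y, z, dup;
    PetscViewer    viewer;
    PetscInt       dim[1] = {128};           /* illustrative FFT size */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
    ierr = MatCreateFFT(PETSC_COMM_WORLD, 1, dim, MATFFTW, &A); CHKERRQ(ierr);
    ierr = MatCreateVecsFFTW(A, &x, &y, &z); CHKERRQ(ierr);  /* FFTW-aligned vectors */
    ierr = VecDuplicate(x, &dup); CHKERRQ(ierr);             /* ex1.c stops after this */

    /* ex2.c additionally loads data into the duplicated vector */
    ierr = PetscObjectSetName((PetscObject)dup, "vec_in"); CHKERRQ(ierr);
    ierr = PetscViewerHDF5Open(PETSC_COMM_WORLD, "input.h5", FILE_MODE_READ, &viewer); CHKERRQ(ierr);
    ierr = VecLoad(dup, viewer); CHKERRQ(ierr);
    ierr = PetscViewerDestroy(&viewer); CHKERRQ(ierr);

    /* per the report above, the failure appears when destroying the loaded duplicate */
    ierr = VecDestroy(&dup); CHKERRQ(ierr);
    ierr = VecDestroy(&x); CHKERRQ(ierr);
    ierr = VecDestroy(&y); CHKERRQ(ierr);
    ierr = VecDestroy(&z); CHKERRQ(ierr);
    ierr = MatDestroy(&A); CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }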


--
Sajid Ali
Applied Physics
Northwestern University
s-sajid-ali.github.io


Re: [petsc-users] Do the guards against calling MPI_Comm_dup() in PetscCommDuplicate() apply with Fortran?

2019-11-01 Thread Smith, Barry F. via petsc-users



> On Nov 1, 2019, at 10:54 AM, Patrick Sanan wrote:
> 
> Thanks, Barry. I should have realized that was an ancient version. The 
> cluster does have Open MPI 4.0.1 so I'll see if we can't use that instead. 
> (I'm sure that the old version is there just to provide continuity - the 
> weird thing is that the previous, quite similar, cluster used Open MPI 1.6.5 
> and that seemed to work fine with this application :D )

  Yes, the bug was introduced into OpenMPI at some point and then removed at a 
later point, so it is actually completely reasonable that the older OpenMPI 
worked fine.

   Barry

> 
>> On 01.11.2019 at 16:24, Smith, Barry F. wrote:
>> 
>> 
>> Certain OpenMPI versions have bugs where even when you properly duplicate 
>> and then free communicators it eventually "runs out of communicators". This 
>> is definitely a bug and was fixed in later OpenMPI versions. We wasted a 
>> lot of time tracking down this bug in the past. By now it is an old version 
>> of OpenMPI; the OpenMPI site https://www.open-mpi.org/software/ompi/v4.0/ 
>> lists the buggy versions as retired. 
>> 
>>  So the question is should PETSc attempt to change its behavior or add 
>> functionality or hacks to work around this bug?
>> 
>>  My answer is NO. This is a "NEW" cluster! A "NEW" cluster is not running 
>> OpenMPI 2.1, by definition of new. The cluster manager needs to remove the 
>> buggy version of OpenMPI from their system. If the cluster manager is 
>> incapable of doing the most elementary part of their job (removing buggy 
>> code) then the application person is stuck having to put hacks into their 
>> code to work around the bugs on their cluster; it cannot be PETSc's 
>> responsibility to distort itself due to ancient bugs in other software.
>> 
>> Barry
>> 
>> Note that this OpenMPI bug does not affect very many MPI or PETSc codes. It 
>> only affects codes that, entirely correctly, duplicate and free communicators 
>> many times. This is why PETSc configure doesn't blacklist the OpenMPI 
>> version (though perhaps it should).
>> 
>> 
>> 
>>> On Nov 1, 2019, at 5:41 AM, Patrick Sanan via petsc-users wrote:
>>> 
>>> Context: I'm trying to track down an error that (only) arises when running 
>>> a Fortran 90 code, using PETSc, on a new cluster. The code creates and 
>>> destroys a linear system (Mat,Vec, and KSP) at each of (many) timesteps. 
>>> The error message from a user looks like this, which leads me to suspect 
>>> that MPI_Comm_dup() is being called many times and this is eventually a 
>>> problem for this particular MPI implementation (Open MPI 2.1.0):
>>> 
>>> [lo-a2-058:21425] *** An error occurred in MPI_Comm_dup
>>> [lo-a2-058:21425] *** reported by process [487873,2]
>>> [lo-a2-058:21425] *** on communicator MPI COMMUNICATOR 65534 DUP FROM 65533
>>> [lo-a2-058:21425] *** MPI_ERR_INTERN: internal error
>>> [lo-a2-058:21425] *** MPI_ERRORS_ARE_FATAL (processes in this communicator 
>>> will now abort,
>>> [lo-a2-058:21425] ***and potentially your MPI job)
>>> 
>>> Question: I remember some discussion recently (but can't find the thread) 
>>> about not calling MPI_Comm_dup() too many times from PetscCommDuplicate(), 
>>> which would allow one to safely use the (admittedly not optimal) approach 
>>> used in this application code. Is that a correct understanding and would 
>>> the fixes made in that context also apply to Fortran? I don't fully 
>>> understand the details of the MPI techniques used, so thought I'd ask here. 
>>> 
>>> If I hack a simple build-solve-destroy example to run several loops, I see 
>>> a notable difference between C and Fortran examples. With the attached 
>>> ex223.c and ex221f.F90, which just add outer loops (5 iterations) to KSP 
>>> tutorials examples ex23.c and ex21f.F90, respectively, I see the following. 
>>> Note that in the Fortran case, it appears that communicators are actually 
>>> duplicated in each loop, but in the C case, this only happens in the first 
>>> loop:
>>> 
>>> [(arch-maint-extra-opt) tutorials (maint *$%=)]$ ./ex223 -info | grep 
>>> PetscCommDuplicate
>>> [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374784 
>>> max tags = 268435455
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
>>> -2080374784
>>> [0] PetscCommDuplicate(): Using internal 
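
For the archive: the question above is about the guard inside
PetscCommDuplicate(), which caches an inner communicator as an MPI attribute
on the user's communicator so that MPI_Comm_dup() is called at most once per
user communicator. The following is a schematic illustration of that caching
idea only; it is not PETSc's actual source, and the names are invented.

  #include <mpi.h>
  #include <stdlib.h>

  static int inner_keyval = MPI_KEYVAL_INVALID;

  /* Return a cached duplicate of user_comm, duplicating only on first use.
     A complete version would free the cached communicator and the malloc'ed
     pointer in a delete-attribute callback. */
  MPI_Comm get_inner_comm(MPI_Comm user_comm)
  {
    MPI_Comm *inner;
    int       found;

    if (inner_keyval == MPI_KEYVAL_INVALID) {
      MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                             &inner_keyval, NULL);
    }
    MPI_Comm_get_attr(user_comm, inner_keyval, &inner, &found);
    if (!found) {                      /* first call on this communicator */
      inner = (MPI_Comm *)malloc(sizeof(*inner));
      MPI_Comm_dup(user_comm, inner);
      MPI_Comm_set_attr(user_comm, inner_keyval, inner);
    }
    return *inner;                     /* later calls reuse the cached comm */
  }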

Re: [petsc-users] Do the guards against calling MPI_Comm_dup() in PetscCommDuplicate() apply with Fortran?

2019-11-01 Thread Smith, Barry F. via petsc-users


  Certain OpenMPI versions have bugs where even when you properly duplicate and 
then free communicators it eventually "runs out of communicators". This is 
definitely a bug and was fixed in later OpenMPI versions. We wasted a lot of 
time tracking down this bug in the past. By now it is an old version of 
OpenMPI; the OpenMPI site https://www.open-mpi.org/software/ompi/v4.0/ lists 
the buggy versions as retired. 

   So the question is should PETSc attempt to change its behavior or add 
functionality or hacks to work around this bug?

   My answer is NO. This is a "NEW" cluster! A "NEW" cluster is not running 
OpenMPI 2.1, by definition of new. The cluster manager needs to remove the 
buggy version of OpenMPI from their system. If the cluster manager is incapable 
of doing the most elementary part of their job (removing buggy code) then 
the application person is stuck having to put hacks into their code to work 
around the bugs on their cluster; it cannot be PETSc's responsibility to 
distort itself due to ancient bugs in other software.

  Barry

Note that this OpenMPI bug does not affect very many MPI or PETSc codes. It 
only affects codes that, entirely correctly, duplicate and free communicators 
many times. This is why PETSc configure doesn't blacklist the OpenMPI version 
(though perhaps it should).
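
A minimal sketch of the (entirely correct) usage pattern referred to above: a
code that duplicates and then frees a communicator on every iteration, e.g.
once per timestep. On a correct MPI implementation this runs indefinitely; on
the buggy OpenMPI releases it eventually fails inside MPI_Comm_dup() even
though nothing is leaked. The loop count is arbitrary.

  #include <mpi.h>

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);
    for (int i = 0; i < 100000; i++) {
      MPI_Comm dup;
      MPI_Comm_dup(MPI_COMM_WORLD, &dup);   /* e.g. inside Mat/Vec/KSP creation */
      MPI_Comm_free(&dup);                  /* e.g. inside the matching destroy */
    }
    MPI_Finalize();
    return 0;
  }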



> On Nov 1, 2019, at 5:41 AM, Patrick Sanan via petsc-users wrote:
> 
> Context: I'm trying to track down an error that (only) arises when running a 
> Fortran 90 code, using PETSc, on a new cluster. The code creates and destroys 
> a linear system (Mat,Vec, and KSP) at each of (many) timesteps. The error 
> message from a user looks like this, which leads me to suspect that 
> MPI_Comm_dup() is being called many times and this is eventually a problem 
> for this particular MPI implementation (Open MPI 2.1.0):
> 
> [lo-a2-058:21425] *** An error occurred in MPI_Comm_dup
> [lo-a2-058:21425] *** reported by process [487873,2]
> [lo-a2-058:21425] *** on communicator MPI COMMUNICATOR 65534 DUP FROM 65533
> [lo-a2-058:21425] *** MPI_ERR_INTERN: internal error
> [lo-a2-058:21425] *** MPI_ERRORS_ARE_FATAL (processes in this communicator 
> will now abort,
> [lo-a2-058:21425] ***and potentially your MPI job)
> 
> Question: I remember some discussion recently (but can't find the thread) 
> about not calling MPI_Comm_dup() too many times from PetscCommDuplicate(), 
> which would allow one to safely use the (admittedly not optimal) approach 
> used in this application code. Is that a correct understanding and would the 
> fixes made in that context also apply to Fortran? I don't fully understand 
> the details of the MPI techniques used, so thought I'd ask here. 
> 
> If I hack a simple build-solve-destroy example to run several loops, I see a 
> notable difference between C and Fortran examples. With the attached ex223.c 
> and ex221f.F90, which just add outer loops (5 iterations) to KSP tutorials 
> examples ex23.c and ex21f.F90, respectively, I see the following. Note that 
> in the Fortran case, it appears that communicators are actually duplicated in 
> each loop, but in the C case, this only happens in the first loop:
> 
> [(arch-maint-extra-opt) tutorials (maint *$%=)]$ ./ex223 -info | grep 
> PetscCommDuplicate
> [0] PetscCommDuplicate(): Duplicating a communicator 1140850688 -2080374784 
> max tags = 268435455
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] PetscCommDuplicate(): Using internal PETSc communicator 1140850688 
> -2080374784
> [0] 

Re: [petsc-users] VI: RS vs SS

2019-11-01 Thread Munson, Todd via petsc-users

Yes, that looks weird. Can you send me the linear problem (M, q, l, and u) 
directly? I will take a look and run some other diagnostics with some of my 
other tools.

Thanks, Todd.

> On Nov 1, 2019, at 10:14 AM, Alexander Lindsay wrote:
> 
> No, the matrix is not symmetric because of how we impose some Dirichlet 
> conditions on the boundary. I could easily give you the Jacobian, for one of 
> the "bad" problems. But at least in the case of RSLS, I don't know whether 
> the algorithm is performing badly, or whether the slow convergence is simply 
> a property of the algorithm. Here's a VI monitor history for a representative 
> "bad" solve.
> 
>   0 SNES VI Function norm 0.229489 Active lower constraints 0/1 upper 
> constraints 0/1 Percent of total 0. Percent of bounded 0.
>   1 SNES VI Function norm 0.365268 Active lower constraints 83/85 upper 
> constraints 83/85 Percent of total 0.207241 Percent of bounded 0.
>   2 SNES VI Function norm 0.495088 Active lower constraints 82/84 upper 
> constraints 82/84 Percent of total 0.204744 Percent of bounded 0.
>   3 SNES VI Function norm 0.478328 Active lower constraints 81/83 upper 
> constraints 81/83 Percent of total 0.202247 Percent of bounded 0.
>   4 SNES VI Function norm 0.46163 Active lower constraints 80/82 upper 
> constraints 80/82 Percent of total 0.19975 Percent of bounded 0.
>   5 SNES VI Function norm 0.444996 Active lower constraints 79/81 upper 
> constraints 79/81 Percent of total 0.197253 Percent of bounded 0.
>   6 SNES VI Function norm 0.428424 Active lower constraints 78/80 upper 
> constraints 78/80 Percent of total 0.194757 Percent of bounded 0.
>   7 SNES VI Function norm 0.411916 Active lower constraints 77/79 upper 
> constraints 77/79 Percent of total 0.19226 Percent of bounded 0.
>   8 SNES VI Function norm 0.395472 Active lower constraints 76/78 upper 
> constraints 76/78 Percent of total 0.189763 Percent of bounded 0.
>   9 SNES VI Function norm 0.379092 Active lower constraints 75/77 upper 
> constraints 75/77 Percent of total 0.187266 Percent of bounded 0.
>  10 SNES VI Function norm 0.362776 Active lower constraints 74/76 upper 
> constraints 74/76 Percent of total 0.184769 Percent of bounded 0.
>  11 SNES VI Function norm 0.346525 Active lower constraints 73/75 upper 
> constraints 73/75 Percent of total 0.182272 Percent of bounded 0.
>  12 SNES VI Function norm 0.330338 Active lower constraints 72/74 upper 
> constraints 72/74 Percent of total 0.179775 Percent of bounded 0.
>  13 SNES VI Function norm 0.314217 Active lower constraints 71/73 upper 
> constraints 71/73 Percent of total 0.177278 Percent of bounded 0.
>  14 SNES VI Function norm 0.298162 Active lower constraints 70/72 upper 
> constraints 70/72 Percent of total 0.174782 Percent of bounded 0.
>  15 SNES VI Function norm 0.282173 Active lower constraints 69/71 upper 
> constraints 69/71 Percent of total 0.172285 Percent of bounded 0.
>  16 SNES VI Function norm 0.26625 Active lower constraints 68/70 upper 
> constraints 68/70 Percent of total 0.169788 Percent of bounded 0.
>  17 SNES VI Function norm 0.250393 Active lower constraints 67/69 upper 
> constraints 67/69 Percent of total 0.167291 Percent of bounded 0.
>  18 SNES VI Function norm 0.234604 Active lower constraints 66/68 upper 
> constraints 66/68 Percent of total 0.164794 Percent of bounded 0.
>  19 SNES VI Function norm 0.218882 Active lower constraints 65/67 upper 
> constraints 65/67 Percent of total 0.162297 Percent of bounded 0.
>  20 SNES VI Function norm 0.203229 Active lower constraints 64/66 upper 
> constraints 64/66 Percent of total 0.1598 Percent of bounded 0.
>  21 SNES VI Function norm 0.187643 Active lower constraints 63/65 upper 
> constraints 63/65 Percent of total 0.157303 Percent of bounded 0.
>  22 SNES VI Function norm 0.172126 Active lower constraints 62/64 upper 
> constraints 62/64 Percent of total 0.154806 Percent of bounded 0.
>  23 SNES VI Function norm 0.156679 Active lower constraints 61/63 upper 
> constraints 61/63 Percent of total 0.15231 Percent of bounded 0.
>  24 SNES VI Function norm 0.141301 Active lower constraints 60/62 upper 
> constraints 60/62 Percent of total 0.149813 Percent of bounded 0.
>  25 SNES VI Function norm 0.125993 Active lower constraints 59/61 upper 
> constraints 59/61 Percent of total 0.147316 Percent of bounded 0.
>  26 SNES VI Function norm 0.110755 Active lower constraints 58/60 upper 
> constraints 58/60 Percent of total 0.144819 Percent of bounded 0.
>  27 SNES VI Function norm 0.0955886 Active lower constraints 57/59 upper 
> constraints 57/59 Percent of total 0.142322 Percent of bounded 0.
>  28 SNES VI Function norm 0.0804936 Active lower constraints 56/58 upper 
> constraints 56/58 Percent of total 0.139825 Percent of bounded 0.
>  29 SNES VI Function norm 0.0654705 Active lower constraints 55/57 upper 
> constraints 55/57 Percent of total 0.137328 Percent of bounded 0.
>  30 SNES VI Function norm 0.0505198 Active lower constraints 
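
For the archive, the two methods being compared in this thread are PETSc's
reduced-space (RSLS) and semismooth (SSLS) VI Newton solvers. Below is a
minimal sketch of how they are selected; the residual/Jacobian callbacks and
the bound vectors xl and xu are assumed to be set up elsewhere, and the
function name is invented.

  #include <petscsnes.h>

  /* Sketch: choose between the reduced-space and semismooth VI solvers;
     not taken from the application code discussed in this thread. */
  PetscErrorCode solve_vi(SNES snes, Vec x, Vec xl, Vec xu)
  {
    PetscErrorCode ierr;

    ierr = SNESSetType(snes, SNESVINEWTONRSLS); CHKERRQ(ierr);  /* or SNESVINEWTONSSLS */
    ierr = SNESVISetVariableBounds(snes, xl, xu); CHKERRQ(ierr);
    ierr = SNESSetFromOptions(snes); CHKERRQ(ierr);  /* -snes_type vinewtonssls overrides */
    ierr = SNESSolve(snes, NULL, x); CHKERRQ(ierr);
    return 0;
  }

Running with -snes_vi_monitor produces the "SNES VI Function norm ... Active
lower constraints ..." lines quoted in this thread, and switching -snes_type
between vinewtonrsls and vinewtonssls compares the two algorithms on the same
problem.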

Re: [petsc-users] VI: RS vs SS

2019-11-01 Thread Alexander Lindsay via petsc-users
No, the matrix is not symmetric because of how we impose some Dirichlet
conditions on the boundary. I could easily give you the Jacobian, for one
of the "bad" problems. But at least in the case of RSLS, I don't know
whether the algorithm is performing badly, or whether the slow convergence
is simply a property of the algorithm. Here's a VI monitor history for a
representative "bad" solve.

  0 SNES VI Function norm 0.229489 Active lower constraints 0/1 upper
constraints 0/1 Percent of total 0. Percent of bounded 0.
  1 SNES VI Function norm 0.365268 Active lower constraints 83/85 upper
constraints 83/85 Percent of total 0.207241 Percent of bounded 0.
  2 SNES VI Function norm 0.495088 Active lower constraints 82/84 upper
constraints 82/84 Percent of total 0.204744 Percent of bounded 0.
  3 SNES VI Function norm 0.478328 Active lower constraints 81/83 upper
constraints 81/83 Percent of total 0.202247 Percent of bounded 0.
  4 SNES VI Function norm 0.46163 Active lower constraints 80/82 upper
constraints 80/82 Percent of total 0.19975 Percent of bounded 0.
  5 SNES VI Function norm 0.444996 Active lower constraints 79/81 upper
constraints 79/81 Percent of total 0.197253 Percent of bounded 0.
  6 SNES VI Function norm 0.428424 Active lower constraints 78/80 upper
constraints 78/80 Percent of total 0.194757 Percent of bounded 0.
  7 SNES VI Function norm 0.411916 Active lower constraints 77/79 upper
constraints 77/79 Percent of total 0.19226 Percent of bounded 0.
  8 SNES VI Function norm 0.395472 Active lower constraints 76/78 upper
constraints 76/78 Percent of total 0.189763 Percent of bounded 0.
  9 SNES VI Function norm 0.379092 Active lower constraints 75/77 upper
constraints 75/77 Percent of total 0.187266 Percent of bounded 0.
 10 SNES VI Function norm 0.362776 Active lower constraints 74/76 upper
constraints 74/76 Percent of total 0.184769 Percent of bounded 0.
 11 SNES VI Function norm 0.346525 Active lower constraints 73/75 upper
constraints 73/75 Percent of total 0.182272 Percent of bounded 0.
 12 SNES VI Function norm 0.330338 Active lower constraints 72/74 upper
constraints 72/74 Percent of total 0.179775 Percent of bounded 0.
 13 SNES VI Function norm 0.314217 Active lower constraints 71/73 upper
constraints 71/73 Percent of total 0.177278 Percent of bounded 0.
 14 SNES VI Function norm 0.298162 Active lower constraints 70/72 upper
constraints 70/72 Percent of total 0.174782 Percent of bounded 0.
 15 SNES VI Function norm 0.282173 Active lower constraints 69/71 upper
constraints 69/71 Percent of total 0.172285 Percent of bounded 0.
 16 SNES VI Function norm 0.26625 Active lower constraints 68/70 upper
constraints 68/70 Percent of total 0.169788 Percent of bounded 0.
 17 SNES VI Function norm 0.250393 Active lower constraints 67/69 upper
constraints 67/69 Percent of total 0.167291 Percent of bounded 0.
 18 SNES VI Function norm 0.234604 Active lower constraints 66/68 upper
constraints 66/68 Percent of total 0.164794 Percent of bounded 0.
 19 SNES VI Function norm 0.218882 Active lower constraints 65/67 upper
constraints 65/67 Percent of total 0.162297 Percent of bounded 0.
 20 SNES VI Function norm 0.203229 Active lower constraints 64/66 upper
constraints 64/66 Percent of total 0.1598 Percent of bounded 0.
 21 SNES VI Function norm 0.187643 Active lower constraints 63/65 upper
constraints 63/65 Percent of total 0.157303 Percent of bounded 0.
 22 SNES VI Function norm 0.172126 Active lower constraints 62/64 upper
constraints 62/64 Percent of total 0.154806 Percent of bounded 0.
 23 SNES VI Function norm 0.156679 Active lower constraints 61/63 upper
constraints 61/63 Percent of total 0.15231 Percent of bounded 0.
 24 SNES VI Function norm 0.141301 Active lower constraints 60/62 upper
constraints 60/62 Percent of total 0.149813 Percent of bounded 0.
 25 SNES VI Function norm 0.125993 Active lower constraints 59/61 upper
constraints 59/61 Percent of total 0.147316 Percent of bounded 0.
 26 SNES VI Function norm 0.110755 Active lower constraints 58/60 upper
constraints 58/60 Percent of total 0.144819 Percent of bounded 0.
 27 SNES VI Function norm 0.0955886 Active lower constraints 57/59 upper
constraints 57/59 Percent of total 0.142322 Percent of bounded 0.
 28 SNES VI Function norm 0.0804936 Active lower constraints 56/58 upper
constraints 56/58 Percent of total 0.139825 Percent of bounded 0.
 29 SNES VI Function norm 0.0654705 Active lower constraints 55/57 upper
constraints 55/57 Percent of total 0.137328 Percent of bounded 0.
 30 SNES VI Function norm 0.0505198 Active lower constraints 54/56 upper
constraints 54/56 Percent of total 0.134831 Percent of bounded 0.
 31 SNES VI Function norm 0.0356422 Active lower constraints 53/55 upper
constraints 53/55 Percent of total 0.132335 Percent of bounded 0.
 32 SNES VI Function norm 0.020838 Active lower constraints 52/54 upper
constraints 52/54 Percent of total 0.129838 Percent of bounded 0.
 33 SNES VI Function norm 0.0061078 Active lower constraints