Bug#791676: libmpich-dev 3.4~a2+really3.3.2-2 has broken Fortran due to gfortran-10 specific flags

2020-08-10 Thread Jed Brown
I'm replying here because I think it's part of this bug, but more severe
in practice.  It was evidently built with gfortran-10, but depends on
gfortran-9, which doesn't recognize these arguments.

$ cat > a.f90
program main
end
^D
$ mpifort a.f90 
f95: error: unrecognized command line option ‘-fallow-invalid-boz’
f95: error: unrecognized command line option ‘-fallow-argument-mismatch’; did 
you mean ‘-Wno-argument-mismatch’?

This can be worked around if one has gfortran-10 installed:

$ MPICH_FC=gfortran-10 mpifort a.f90
$


$ mpifort -show
f95 -O2 -fdebug-prefix-map=/build/mpich-Aiaw9P/mpich-3.4~a2+really3.3.2=. 
-fstack-protector-strong -fallow-invalid-boz -fallow-argument-mismatch 
-Wl,-z,relro -I/usr/include/x86_64-linux-gnu/mpich 
-I/usr/include/x86_64-linux-gnu/mpich -L/usr/lib/x86_64-linux-gnu -lmpichfort 
-lmpich

-- 
debian-science-maintainers mailing list
debian-science-maintainers@alioth-lists.debian.net
https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/debian-science-maintainers

Bug#953116: [petsc-maint] 32 bit vs 64 bit PETSc

2020-05-25 Thread Jed Brown
Drew Parsons  writes:

> On 2020-05-23 14:49, Drew Parsons wrote:
>> On 2020-05-23 14:18, Jed Brown wrote:
>> 
>>> I wonder if you are aware of any static analysis tools that can
>>> flag implicit conversions of this sort:
>>> 
>>> int64_t n = ...;
>>> for (int32_t i=0; i<n; i++) {
>>>   ...
>>> }
>>> 
>>> There is -fsanitize=signed-integer-overflow (which generates a runtime
>>> error message), but that requires data to cause overflow at every
>>> possible location.
>> 
>> I'll ask the Debian gcc team and the Science team if they have ideas 
>> about this.
>> 
>
>
> Hi Jed, Thomas Schiex from Debian Science has replied to this question, 
> suggesting clang-static-analyzer or lgtm:
>
> For open source projects, a few online static analyzers are available 
> and usable for free. This kind of integer type mismatch will be caught by 
> most of them. Possibly clang-static-analyzer will do the job. 

I had tried this first, but I think implementing such a checker there would require significant work.

> Otherwise, an easy one is lgtm for example. See https://lgtm.com/

This looks interesting, but it isn't obvious how to implement this sort
of check in their language.  They have a bunch of examples, but they
seem simpler.


Bug#953116: [petsc-maint] 32 bit vs 64 bit PETSc

2020-05-23 Thread Jed Brown
Drew Parsons  writes:

> Hi, the Debian project is discussing whether we should start providing a 
> 64 bit build of PETSc (which means we'd have to upgrade our entire 
> computational library stack, starting from BLAS and going through MPI, 
> MUMPS, etc).

You don't need to change BLAS or MPI.

> A default PETSc build uses 32 bit addressing to index vectors and 
> matrices.  64 bit addressing can be switched on by configuring with 
> --with-64-bit-indices=1, allowing much larger systems to be handled.
>
> My question for petsc-maint is, is there a reason why 64 bit indexing is 
> not already activated by default on 64-bit systems?  Certainly C 
> pointers and type int would already be 64 bit on these systems.

Umm, x86-64 Linux is LP64, so int is 32-bit.  ILP64 is relatively exotic
these days.

> Is it a question of performance?  Is 32 bit indexing executed faster (in 
> the sense of 2 operations per clock cycle), such that 64-bit addressing 
> is accompanied with a drop in performance? 

Sparse iterative solvers are entirely limited by memory bandwidth;
streaming sizeof(double) + sizeof(int64_t) = 16 bytes per nonzero incurs
a performance hit relative to 12 bytes with int32_t indices.  It has
nothing to do with clock cycles for instructions, just memory bandwidth
(and memory usage, but that is less often an issue).

> In that case we'd only want to use 64-bit PETSc if the system being
> modelled is large enough to actually need it. Or is there a different
> reason that 64 bit indexing is not switched on by default?

It's just about performance, as above.  There are two situations in
which 64-bit is needed.  Historically (supercomputing with thinner
nodes), it has been that you're solving problems with more than 2B dofs.
In today's age of fat nodes, it also happens that a matrix on a single
MPI rank has more than 2B nonzeros.  This is especially common when
using direct solvers.  We'd like to address the latter case by only
promoting the row offsets (thereby avoiding the memory hit of promoting
column indices):

https://gitlab.com/petsc/petsc/-/issues/333


I wonder if you are aware of any static analysis tools that can
flag implicit conversions of this sort:

int64_t n = ...;
for (int32_t i=0; i<n; i++) {
  ...
}