Re: [petsc-users] Question about memory usage in Multigrid preconditioner

2016-07-11 Thread Dave May
Hi Frank,


On 11 July 2016 at 19:14, frank  wrote:

> Hi Dave,
>
> I re-ran the test using bjacobi as the preconditioner on the coarse mesh
> of telescope. The grid is 3072*256*768 and the process mesh is 96*8*24. The
> petsc options file is attached.
> I still got the "Out Of Memory" error. The error occurred before the
> linear solver finished one step, so I don't have the full info from
> ksp_view. The info from ksp_view_pre is attached.
>

Okay - that is essentially useless (sorry)


>
> It seems to me that the error occurred when the decomposition was about to
> be changed.
>

Based on what information?
Running with -info would give us more clues, but it will create a ton of
output.
Please try running the case which failed with -info.
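A hypothetical launch line (the executable name, rank count, and file names below are placeholders, not taken from this thread):

```shell
# Placeholders throughout: substitute your actual binary, rank count,
# and options file. The -info output is voluminous, so capture it to a
# file rather than letting it scroll past.
mpiexec -n 18432 ./your_solver -options_file petsc_options.txt -info > info.log 2>&1
```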


> I had another test with a grid of 1536*128*384 and the same process mesh
> as above. There was no error. The ksp_view info is attached for comparison.
> Thank you.
>


[3] Here is my crude estimate of your memory usage.
I'll target the biggest memory hogs only, to get an order-of-magnitude
estimate.

* The fine grid operator contains 4223139840 non-zeros --> 1.8 GB per MPI
rank, assuming double precision.
The indices for the AIJ format could amount to another 0.3 GB (assuming
32-bit integers)

* You use 5 levels of coarsening, so the other operators should represent
(collectively)
2.1 / 8 + 2.1/8^2 + 2.1/8^3 + 2.1/8^4  ~ 300 MB per MPI rank on the
communicator with 18432 ranks.
The coarse grid should consume ~ 0.5 MB per MPI rank on the communicator
with 18432 ranks.

* You use a reduction factor of 64, giving a new communicator with 288
MPI ranks.
PCTelescope will first gather a temporary matrix associated with your
coarse level operator assuming a comm size of 288 living on the comm with
size 18432.
This matrix will require approximately 0.5 * 64 = 32 MB per core on the 288
ranks.
This matrix is then used to form a new MPIAIJ matrix on the subcomm, thus
requiring another 32 MB per rank.
The temporary matrix is now destroyed.
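The gather-to-subcomm behaviour described above is driven by runtime options; a hypothetical options-file fragment (option names follow PCTelescope's prefix convention and should be verified against your PETSc version) might look like:

```shell
# Hypothetical fragment: use telescope on the coarse level of multigrid,
# gather onto a 64x smaller communicator, and precondition there with
# bjacobi. Verify exact prefixes with -help for your PETSc version.
-mg_coarse_pc_type telescope
-mg_coarse_pc_telescope_reduction_factor 64
-mg_coarse_telescope_pc_type bjacobi
```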

* Because a DMDA is detected, a permutation matrix is assembled.
This requires 2 doubles per point in the DMDA.
Your coarse DMDA contains 92 x 16 x 48 points.
Thus the permutation matrix will require < 1 MB per MPI rank on the
sub-comm.

* Lastly, the matrix is permuted. This uses MatPtAP(), but the resulting
operator will have the same memory footprint as the unpermuted matrix (32
MB). At any stage in PCTelescope, only 2 operators of size 32 MB are held
in memory when the DMDA is provided.

From my rough estimates, the worst-case memory footprint for any given
core, given your options, is approximately
2100 MB + 300 MB + 32 MB + 32 MB + 1 MB  = 2465 MB
This is way below 8 GB.
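As a sanity check, the arithmetic above can be reproduced in a few lines. This is a sketch only: every number is the estimate from this email, not a PETSc measurement.

```python
# Rough per-rank memory estimate, reproducing the numbers above.
# All inputs are the estimates from this email, not measurements.
fine_mb = 2100.0                                     # fine-grid operator (values + AIJ indices)
coarse_mb = sum(2100.0 / 8**k for k in range(1, 5))  # levels below the fine grid, ~300 MB
gathered_mb = 32.0                                   # temporary gathered matrix (0.5 MB * 64)
subcomm_mb = 32.0                                    # MPIAIJ matrix formed on the 288-rank subcomm
perm_mb = 1.0                                        # DMDA permutation matrix, < 1 MB

total_mb = fine_mb + coarse_mb + gathered_mb + subcomm_mb + perm_mb
print(round(total_mb), "MB")  # -> 2465 MB
```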

Note this estimate completely ignores:
(1) the memory required for the restriction operator,
(2) the potential growth in the number of non-zeros per row due to Galerkin
coarsening (I wish -ksp_view_pre reported the output from MatView so we
could see the number of non-zeros required by the coarse level operators)
(3) all temporary vectors required by the CG solver, and those required by
the smoothers.
(4) internal memory allocated by MatPtAP
(5) memory associated with IS's used within PCTelescope

So either I am completely off in my estimates, or you have not carefully
estimated the memory usage of your application code. Hopefully others can
examine/correct my rough estimates.

Since I don't have your code I cannot assess the latter.
Since I don't have access to the same machine you are running on, I think
we need to take a step back.

[1] What machine are you running on? Send me a URL if it's available

[2] What discretization are you using? (I am guessing a scalar 7 point FD
stencil)
If it's a 7 point FD stencil, we should be able to examine the memory usage
of your solver configuration using a standard, lightweight existing PETSc
example, run on your machine at the same scale.
This would hopefully enable us to correctly evaluate the actual memory
usage required by the solver configuration you are using.
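For instance, a sketch using KSP tutorial ex45 (a 3D 7-point Laplacian): the grid sizes come from this thread, but the path, make target, and option names are assumptions that should be checked against your PETSc version.

```shell
# Sketch: run a standard 3D Poisson example at the same grid/process
# scale as the failing case. Paths and option names assume a PETSc
# 3.7-era source layout; verify locally before running.
cd $PETSC_DIR/src/ksp/ksp/examples/tutorials
make ex45
mpiexec -n 18432 ./ex45 \
    -da_grid_x 3072 -da_grid_y 256 -da_grid_z 768 \
    -ksp_type cg -options_file petsc_options.txt \
    -memory_view -log_view
```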

Thanks,
  Dave


>
>
> Frank
>
>
>
>
> On 07/08/2016 10:38 PM, Dave May wrote:
>
>
>
> On Saturday, 9 July 2016, frank  wrote:
>
>> Hi Barry and Dave,
>>
>> Thank both of you for the advice.
>>
>> @Barry
>> I made a mistake in the file names in the last email. I attached the correct
>> files this time.
>> For all three tests, 'Telescope' is used as the coarse preconditioner.
>>
>> == Test1:   Grid: 1536*128*384,   Process Mesh: 48*4*12
>> Part of the memory usage:
>>   Vector   125   124   3971904   0.
>>   Matrix   101   101   9462372   0.
>>
>> == Test2: Grid: 1536*128*384,   Process Mesh: 96*8*24
>> Part of the memory usage:
>>   Vector   125   124    681672   0.
>>   Matrix   101   101   1462180   0.
>>
>> In theory, the per-rank memory usage in Test1 should be 8 times that of
>> Test2. In my case, it is about 6 times.
>>
>> == Test3: Grid: 3072*256*768,   

Re: [petsc-users] [Slepc 3.7.1][macOS] install name is set to build folder instead of prefix

2016-07-11 Thread Denis Davydov

> On 11 Jul 2016, at 21:06, Jose E. Roman  wrote:
> 
> I don't understand why I don't get this warning.
> Still I don't see where the problem is. Please tell me exactly what you want 
> me to change, or better make a pull request.

The problem has to do with the assumptions in the python scripts. See below the 
values of the variables, which will not work as expected, 
i.e. installName = oldname.replace(self.archDir, self.installDir) will not do 
any replacement.
Why you can't reproduce it, I don't know.

In any case, I have a working solution, so it's not an issue for me, and it is 
up to you if you want to investigate it further.
I just wanted to point out that this part of the python code does not work in 
all circumstances.
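To illustrate the failure mode, here is a self-contained sketch with made-up stand-in paths (not the actual SLEPc script): str.replace() silently does nothing when the two directory strings differ, which is exactly what happens when one of them is a symlink; normalizing with os.path.realpath() avoids it.

```python
import os
import tempfile

# Made-up stand-ins: 'build' plays the role of the /private/var/... build
# dir, 'link' the spack stage symlink, '/opt/slepc' the install prefix.
build = os.path.realpath(tempfile.mkdtemp())
link = build + "-link"
os.symlink(build, link)

oldname = build + "/lib/libslepc.3.7.dylib"  # path baked into the dylib
arch_dir = link                              # what the script thinks SLEPC_DIR is
install_dir = "/opt/slepc"

# The plain replace is a no-op: the strings differ even though both
# paths name the same directory.
assert oldname.replace(arch_dir, install_dir) == oldname

# Resolving symlinks first makes the substitution take effect.
fixed = oldname.replace(os.path.realpath(arch_dir), install_dir)
assert fixed == "/opt/slepc/lib/libslepc.3.7.dylib"
```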

Regards,
Denis.


 so here is what happens. The issue appears when SLEPC_DIR is set to a 
 symlink (the one with “stage” below) of a build folder (the one with 
 “private” below). 
 During configure there is a warning that SLEPC_DIR is not the same as the 
 current dir (string comparison),
 but one is a symlink of the other, so everything but install_name_tool works. 
 The latter leads to the following values of variables:
 
 oldname
 =/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
 
 installName=/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
 
 archDir
 =/Users/davydden/spack/var/spack/stage/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/slepc-3.7.1/installed-arch-darwin-c-opt
 
 installDir 
 =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr
 
 dst
 =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/lib/libslepc.3.7.1.dylib
 
 As you see, installName wasn’t changed from oldname.
 
 Since the python code relies on SLEPC_DIR being pwd(), I would suggest 
 throwing an error instead of the warning, to make
 sure that users won't get into the situation above. An alternative is to make 
 this part of the code more robust.
 
 When SLEPC_DIR==pwd(), the patch you referred to works.



Re: [petsc-users] [Slepc 3.7.1][macOS] install name is set to build folder instead of prefix

2016-07-11 Thread Jose E. Roman
I don't understand why I don't get this warning.
Still I don't see where the problem is. Please tell me exactly what you want me 
to change, or better make a pull request.
Thanks.
Jose



> On 11 Jul 2016, at 17:06, Denis Davydov  wrote:
> 
> Here is the warning:
> 
> Your SLEPC_DIR may not match the directory you are in
> SLEPC_DIR  
> /Users/davydden/spack/var/spack/stage/slepc-3.7.1-p7hqqclwqvbvra6j44lka3xuc4eycvdg/slepc-3.7.1
>  Current directory 
> /private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-m7Xg8I/slepc-3.7.1
> 
> p.s. this is done within Spack, for a fix see: 
> https://github.com/LLNL/spack/pull/1206
> 
>> On 11 Jul 2016, at 16:53, Jose E. Roman  wrote:
>> 
>> I cannot reproduce this behaviour. If I do for instance this (on OS X El 
>> Capitan):
>> 
>> $ cd ~/tmp
>> $ ln -s $SLEPC_DIR .
>> $ cd slepc-3.7.1
>> $ ./configure
>> $ make
>> $ otool -lv $PETSC_ARCH/lib/libslepc.dylib | grep slepc
>> 
>> I don't get a warning, and the output of otool is the same that would result 
>> if done on $SLEPC_DIR.
>> Which warning are you getting?
>> 
>> Jose
>> 
>> 
>>> On 11 Jul 2016, at 0:48, Denis Davydov  wrote:
>>> 
>>> Hi Jose,
>>> 
>>> so here is what happens. The issue appears when SLEPC_DIR is set to a 
>>> symlink (the one with “stage” below) of a build folder (the one with 
>>> “private” below). 
>>> During configure there is a warning that SLEPC_DIR is not the same as the 
>>> current dir (string comparison),
>>> but one is a symlink of the other, so everything but install_name_tool works. 
>>> The latter leads to the following values of variables:
>>> 
>>> oldname
>>> =/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
>>> 
>>> installName=/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
>>> 
>>> archDir
>>> =/Users/davydden/spack/var/spack/stage/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/slepc-3.7.1/installed-arch-darwin-c-opt
>>> 
>>> installDir 
>>> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr
>>> 
>>> dst
>>> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/lib/libslepc.3.7.1.dylib
>>> 
>>> As you see, installName wasn’t changed from oldname.
>>> 
>>> Since the python code relies on SLEPC_DIR being pwd(), I would suggest 
>>> throwing an error instead of the warning, to make
>>> sure that users won't get into the situation above. An alternative is to make 
>>> this part of the code more robust.
>>> 
>>> When SLEPC_DIR==pwd(), the patch you referred to works.
>>> 
>>> Kind regards,
>>> Denis 
>>> 
>> 
> 



Re: [petsc-users] Diagonalization of a 3D dense matrix

2016-07-11 Thread Matthew Knepley
On Mon, Jul 11, 2016 at 1:22 PM, Ketan Maheshwari <
ketancmaheshw...@gmail.com> wrote:

> Matthew,
>
> I am probably not using the right language but I meant that each element
> has three indices associated with it: x, y, z.
>
> Here is a snapshot:
>
> 1 10 55  5.7113635929515209e-03
>  1 10 56  4.2977490038287334e-03
>  1 10 57  2.8719519782193204e-03
>  1 10 58  1.4380140927001712e-03
>  1 10 59  9.9299930690365083e-17
>  1 11  0  0.0000000000000000e+00
>  1 11  1  1.5658614070601917e-03
>  1 11  2  3.1272842098367562e-03
>  1 11  3  4.6798423857521204e-03
>
> Where the first three columns are the coordinates and the last one is
> value.
>

This is not a matrix. A matrix is a linear operator on some space with a
finite basis: https://en.wikipedia.org/wiki/Matrix_(mathematics)
This is just a set of data points.

Most people would call this a vector, since you have an index I (which
consists of each independent triple) and a value V.
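This flattening can be sketched in a few lines. The grid dimensions here are an assumption (60^3 = 216000, matching the element count mentioned later in the thread), not stated in the data itself.

```python
# Sketch: storing the (x, y, z) -> value data above as a plain vector by
# mapping each index triple to a single flat index.
# Assumed grid dimensions: 60 x 60 x 60 = 216000 entries.
nx, ny, nz = 60, 60, 60

def flat_index(x, y, z):
    # Lexicographic ordering with z varying fastest.
    return (x * ny + y) * nz + z

values = [0.0] * (nx * ny * nz)
# e.g. the sample point "1 11 2 3.1272842098367562e-03":
values[flat_index(1, 11, 2)] = 3.1272842098367562e-03
```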


> Could you clarify the meaning of "diagonalization is not a clear concept",
> and whether it is applicable to this case?
>

There is no one definition of tensor diagonalization.

   Matt


> Thank you,
> --
> Ketan
>
>
> On Mon, Jul 11, 2016 at 1:15 PM, Matthew Knepley 
> wrote:
>
>> On Mon, Jul 11, 2016 at 12:05 PM, Ketan Maheshwari <
>> ketancmaheshw...@gmail.com> wrote:
>>
>>> Hello PETSC-ers,
>>>
>>> I am a research faculty at Univ of Pittsburgh trying to use PETSC/SLEPC
>>> to
>>> obtain the diagonalization of a large matrix using Lanczos or Davidson
>>> method.
>>>
>>> The matrix is a 3 dimensional dense matrix with a total of 216000
>>> elements.
>>>
>>> After looking into some of the examples in PETSC as well SLEPC
>>> implementations
>>> it seems like most of the implementations are with 2 dimensional
>>> matrices.
>>>
>>
>> You will have to explain what you mean by a "3D matrix". A matrix, by
>> definition, has only
>> rows and columns. You may mean a matrix generated from a 3D problem. That
>> should pose
>> no extra difficulty. You may mean a 3-index tensor, in which case
>> diagonalization is not a clear
>> concept.
>>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> So, I was wondering if it is possible to express a 3 dimensional matrix
>>> object
>>> compatible to PETSC so that the SLEPC API could be used to obtain
>>> diagonalization.
>>>
>>> Any suggestions or pointers to documentation or examples would be of
>>> great
>>> help.
>>>
>>> Best,
>>> --
>>> Ketan
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>
>
> --
> Ketan
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] [Slepc 3.7.1][macOS] install name is set to build folder instead of prefix

2016-07-11 Thread Denis Davydov
Here is the warning:

Your SLEPC_DIR may not match the directory you are in
SLEPC_DIR  
/Users/davydden/spack/var/spack/stage/slepc-3.7.1-p7hqqclwqvbvra6j44lka3xuc4eycvdg/slepc-3.7.1
 Current directory 
/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-m7Xg8I/slepc-3.7.1

p.s. this is done within Spack, for a fix see: 
https://github.com/LLNL/spack/pull/1206

> On 11 Jul 2016, at 16:53, Jose E. Roman  wrote:
> 
> I cannot reproduce this behaviour. If I do for instance this (on OS X El 
> Capitan):
> 
> $ cd ~/tmp
> $ ln -s $SLEPC_DIR .
> $ cd slepc-3.7.1
> $ ./configure
> $ make
> $ otool -lv $PETSC_ARCH/lib/libslepc.dylib | grep slepc
> 
> I don't get a warning, and the output of otool is the same that would result 
> if done on $SLEPC_DIR.
> Which warning are you getting?
> 
> Jose
> 
> 
>> On 11 Jul 2016, at 0:48, Denis Davydov  wrote:
>> 
>> Hi Jose,
>> 
>> so here is what happens. The issue appears when SLEPC_DIR is set to a 
>> symlink (the one with “stage” below) of a build folder (the one with 
>> “private” below). 
>> During configure there is a warning that SLEPC_DIR is not the same as the 
>> current dir (string comparison),
>> but one is a symlink of the other, so everything but install_name_tool works. 
>> The latter leads to the following values of variables:
>> 
>> oldname
>> =/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
>> 
>> installName=/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
>> 
>> archDir
>> =/Users/davydden/spack/var/spack/stage/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/slepc-3.7.1/installed-arch-darwin-c-opt
>> 
>> installDir 
>> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr
>> 
>> dst
>> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/lib/libslepc.3.7.1.dylib
>> 
>> As you see, installName wasn’t changed from oldname.
>> 
>> Since the python code relies on SLEPC_DIR being pwd(), I would suggest 
>> throwing an error instead of the warning, to make
>> sure that users won't get into the situation above. An alternative is to make 
>> this part of the code more robust.
>> 
>> When SLEPC_DIR==pwd(), the patch you referred to works.
>> 
>> Kind regards,
>> Denis 
>> 
> 



Re: [petsc-users] HDF5 and PETSc

2016-07-11 Thread Matthew Knepley
On Mon, Jul 11, 2016 at 3:13 AM, Marco Zocca  wrote:

> Sorry for the previous mail; I hadn't fully read ./configure --help:
> all external package options are listed there, including HDF5.
>
> As far as I can see in
> https://www.mcs.anl.gov/petsc/miscellaneous/external.html and on the
> PDF manual, not all external packages are mentioned, and this tripped
> me initially.
>
> So my question becomes: please synchronize the output of ./configure
> --help with manpages and pdf manual :)
>

Done.

https://bitbucket.org/petsc/petsc/commits/b6541ed63645a657daaf31a0efc9fb29a825bfaf
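For completeness, HDF5 support has to be requested explicitly at configure time. A sketch, reusing the configure line from the thread (verify the flags with ./configure --help on your version):

```shell
# Either let PETSc download and build HDF5 alongside the other packages...
./configure --with-cc=gcc --with-cxx=g++ --with-fc=gfortran \
    --download-fblaslapack --download-mpich --download-hdf5
# ...or point configure at an existing HDF5 installation:
# ./configure ... --with-hdf5-dir=/path/to/hdf5
```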

   Matt


> Thanks again,
> Marco
>
>
> On 11 July 2016 at 09:57, Marco Zocca  wrote:
> > Good morning,
> >
> > Does the HDF5 functionality need to be explicitly requested at
> > configure time? I just noticed that my default configuration on a
> > single-node machine does not compile any relevant symbol.
> >
> > I do not have HDF5 installed on my system yet, but I assumed PETSc
> > includes it by default, or automagically pulls the dependency in at
> > config time, since the manual doesn't mention anything about it. Do I
> > have to install HDF5 from source and rebuild PETSc then?
> >
> > Thanks in advance,
> > Marco
> >
> >
> >
> > --- config options and architecture :
> >
> > Configure Options: --configModules=PETSc.Configure
> > --optionsModule=config.compilerOptions --with-cc=gcc --with-cxx=g++
> > --with-fc=gfortran --download-fblaslapack --download-mpich
> > Working directory: /Users/ocramz/petsc-3.7.2
> > Machine platform:
> > ('Darwin', 'fermi.local', '13.4.0', 'Darwin Kernel Version 13.4.0: Sun
> > Aug 17 19:50:11 PDT 2014; root:xnu-2422.115.4~1/RELEASE_X86_64',
> > 'x86_64', 'i386')
> > Python version:
> > 2.7.5 (default, Mar  9 2014, 22:15:05)
> > [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener


Re: [petsc-users] [Slepc 3.7.1][macOS] install name is set to build folder instead of prefix

2016-07-11 Thread Jose E. Roman
I cannot reproduce this behaviour. If I do for instance this (on OS X El 
Capitan):

$ cd ~/tmp
$ ln -s $SLEPC_DIR .
$ cd slepc-3.7.1
$ ./configure
$ make
$ otool -lv $PETSC_ARCH/lib/libslepc.dylib | grep slepc

I don't get a warning, and the output of otool is the same that would result if 
done on $SLEPC_DIR.
Which warning are you getting?

Jose


> On 11 Jul 2016, at 0:48, Denis Davydov  wrote:
> 
> Hi Jose,
> 
> so here is what happens. The issue appears when SLEPC_DIR is set to a symlink 
> (the one with “stage” below) of a build folder (the one with “private” below). 
> During configure there is a warning that SLEPC_DIR is not the same as the 
> current dir (string comparison),
> but one is a symlink of the other, so everything but install_name_tool works. 
> The latter leads to the following values of variables:
> 
> oldname
> =/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
> 
> installName=/private/var/folders/5k/sqpp24tx3ylds4fgm13pfht0gn/T/davydden/spack-stage/spack-stage-MziaMV/slepc-3.7.1/installed-arch-darwin-c-opt/lib/libslepc.3.7.dylib
> 
> archDir
> =/Users/davydden/spack/var/spack/stage/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/slepc-3.7.1/installed-arch-darwin-c-opt
> 
> installDir 
> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr
> 
> dst
> =/Users/davydden/spack/opt/spack/darwin-elcapitan-x86_64/clang-7.3.0-apple/slepc-3.7.1-gimrzhb4mozeus3i2hdmrtjp3tha5pgr/lib/libslepc.3.7.1.dylib
> 
> As you see, installName wasn’t changed from oldname.
> 
> Since the python code relies on SLEPC_DIR being pwd(), I would suggest 
> throwing an error instead of the warning, to make
> sure that users won't get into the situation above. An alternative is to make 
> this part of the code more robust.
> 
> When SLEPC_DIR==pwd(), the patch you referred to works.
> 
> Kind regards,
> Denis 
> 



Re: [petsc-users] HDF5 and PETSc

2016-07-11 Thread Marco Zocca
Sorry for the previous mail; I hadn't fully read ./configure --help:
all external package options are listed there, including HDF5.

As far as I can see in
https://www.mcs.anl.gov/petsc/miscellaneous/external.html and on the
PDF manual, not all external packages are mentioned, and this tripped
me initially.

So my question becomes: please synchronize the output of ./configure
--help with manpages and pdf manual :)

Thanks again,
Marco


On 11 July 2016 at 09:57, Marco Zocca  wrote:
> Good morning,
>
> Does the HDF5 functionality need to be explicitly requested at
> configure time? I just noticed that my default configuration on a
> single-node machine does not compile any relevant symbol.
>
> I do not have HDF5 installed on my system yet, but I assumed PETSc
> includes it by default, or automagically pulls the dependency in at
> config time, since the manual doesn't mention anything about it. Do I
> have to install HDF5 from source and rebuild PETSc then?
>
> Thanks in advance,
> Marco
>
>
>
> --- config options and architecture :
>
> Configure Options: --configModules=PETSc.Configure
> --optionsModule=config.compilerOptions --with-cc=gcc --with-cxx=g++
> --with-fc=gfortran --download-fblaslapack --download-mpich
> Working directory: /Users/ocramz/petsc-3.7.2
> Machine platform:
> ('Darwin', 'fermi.local', '13.4.0', 'Darwin Kernel Version 13.4.0: Sun
> Aug 17 19:50:11 PDT 2014; root:xnu-2422.115.4~1/RELEASE_X86_64',
> 'x86_64', 'i386')
> Python version:
> 2.7.5 (default, Mar  9 2014, 22:15:05)
> [GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)]