[petsc-users] Logo?

2017-08-24 Thread Mohammad Mirzadeh
Hey folks,

Does PETSc have any sort of official logo? I often like to include these in
my acknowledgment slides but can't find one for PETSc. I'd appreciate it if
you could point me to one if it exists.

Mohammad


Re: [petsc-users] Petsc with Windows

2016-11-30 Thread Mohammad Mirzadeh
May I propose Docker as an alternative approach? https://www.docker.com

There are already PETSc images, and creating your own environment is not
that hard. As a bonus you get a cross-platform solution ... unless your
application is Windows-specific, in which case Docker might not be the best
way to go.

On Wed, Nov 30, 2016 at 1:45 AM Jed Brown  wrote:

> Elaine Tang  writes:
>
> > I am developing some software on windows that would like to utilize petsc
> > library. Currently I have petsc library configured on cygwin on my
> windows
> > machine.
>
> This is probably a better choice than Cygwin going forward.
>
> https://msdn.microsoft.com/en-us/commandline/wsl/about
>
> I don't know to what extent PETSc users have experimented with this
> feature, but it should make it easier to build and distribute PETSc.
>
> > Is there any binary of petsc for windows so that the software that I
> > develop will be more portable and can be run on other windows machine as
> > well?
>
> Are you developing an application or a library?
>
-- 
Sent from Gmail Mobile


Re: [petsc-users] issue with NullSpaceRemove in parallel

2016-10-06 Thread Mohammad Mirzadeh
Thanks Barry. That seems to have fixed it; I had a NaN somewhere in the RHS.
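
For the record, a quick (and by no means bulletproof) sanity check I've since
added before KSPSolve, sketched here assuming the right-hand-side Vec is
called rhs:

  PetscReal nrm;
  VecNorm(rhs, NORM_2, &nrm);   /* a NaN/Inf anywhere in rhs makes the norm NaN/Inf */
  if (PetscIsInfOrNanReal(nrm)) SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_FP, "RHS contains NaN/Inf");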

On Wed, Oct 5, 2016 at 11:18 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
>   The message "Scalar value must be same on all processes, argument # 2"
> comes up often when a NaN or Inf has gotten into the computation. The IEEE
> standard for floating point operations defines that NaN != NaN;
>
>   I recommend running again with -fp_trap; this should cause the code to
> stop with an error message as soon as the NaN or Inf is generated.
>
>   Barry
>
>
>
>
> > On Oct 5, 2016, at 9:21 PM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
> >
> > Hi folks,
> >
> > I am trying to track down a bug that is sometimes triggered when solving
> a singular system (poisson+neumann). It only seems to happen in parallel
> and halfway through the run. I can provide detailed information about the
> actual problem, but the error message I get boils down to this:
> >
> > [0]PETSC ERROR: - Error Message
> --
> > [0]PETSC ERROR: Invalid argument
> > [0]PETSC ERROR: Scalar value must be same on all processes, argument # 2
> > [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html
> for trouble shooting.
> > [0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015
> > [0]PETSC ERROR: ./two_fluid_2d on a linux named bazantserver1 by
> mohammad Wed Oct  5 21:14:47 2016
> > [0]PETSC ERROR: Configure options PETSC_ARCH=linux --prefix=/usr/local
> --with-clanguage=cxx --with-c-support --with-shared-libraries
> --download-hypre --download-metis --download-parmetis --download-ml
> --download-superlu_dist COPTFLAGS=" -O3 -march=native" CXXOPTFLAGS=" -O3
> -march=native" FOPTFLAGS=" -O3 -march=native"
> > [0]PETSC ERROR: #1 VecShift() line 1480 in /tmp/petsc-3.6.3/src/vec/vec/
> utils/vinv.c
> > [0]PETSC ERROR: #2 MatNullSpaceRemove() line 348 in
> /tmp/petsc-3.6.3/src/mat/interface/matnull.c
> > [0]PETSC ERROR: #3 KSP_RemoveNullSpace() line 207 in
> /tmp/petsc-3.6.3/include/petsc/private/kspimpl.h
> > [0]PETSC ERROR: #4 KSP_PCApply() line 243 in /tmp/petsc-3.6.3/include/
> petsc/private/kspimpl.h
> > [0]PETSC ERROR: #5 KSPInitialResidual() line 63 in
> /tmp/petsc-3.6.3/src/ksp/ksp/interface/itres.c
> > [0]PETSC ERROR: #6 KSPSolve_BCGS() line 50 in
> /tmp/petsc-3.6.3/src/ksp/ksp/impls/bcgs/bcgs.c
> > [0]PETSC ERROR: #7 KSPSolve() line 604 in /tmp/petsc-3.6.3/src/ksp/ksp/
> interface/itfunc.c
> >
> > I understand this is a somewhat vague question, but any idea what could
> cause this sort of problem? This was on 2 processors. The same code runs
> fine on a single processor. Also the solution seems to converge fine on
> previous iterations, e.g. this is the convergence info from the last
> iteration before the code breaks:
> >
> >   0 KSP preconditioned resid norm 6.814085878146e+01 true resid norm
> 2.885308600701e+00 ||r(i)||/||b|| 1.e+00
> >   1 KSP preconditioned resid norm 3.067319980814e-01 true resid norm
> 8.480307326867e-02 ||r(i)||/||b|| 2.939133555699e-02
> >   2 KSP preconditioned resid norm 1.526405979843e-03 true resid norm
> 1.125228519827e-03 ||r(i)||/||b|| 3.899855008762e-04
> >   3 KSP preconditioned resid norm 2.199423175998e-05 true resid norm
> 4.232832916628e-05 ||r(i)||/||b|| 1.467029528695e-05
> >   4 KSP preconditioned resid norm 5.382291463582e-07 true resid norm
> 8.438732856334e-07 ||r(i)||/||b|| 2.924724535283e-07
> >   5 KSP preconditioned resid norm 9.495525177398e-09 true resid norm
> 1.408250768598e-08 ||r(i)||/||b|| 4.880763077669e-09
> >   6 KSP preconditioned resid norm 9.249233376169e-11 true resid norm
> 2.795840275267e-10 ||r(i)||/||b|| 9.689917655907e-11
> >   7 KSP preconditioned resid norm 1.138293762641e-12 true resid norm
> 2.559058680281e-12 ||r(i)||/||b|| 8.869272006674e-13
> >
> > Also, if it matters, this is using hypre as PC and bcgs as KSP.
> >
> > Thanks
>
>


[petsc-users] issue with NullSpaceRemove in parallel

2016-10-05 Thread Mohammad Mirzadeh
Hi folks,

I am trying to track down a bug that is sometimes triggered when solving a
singular system (Poisson + Neumann). It only seems to happen in parallel and
halfway through the run. I can provide detailed information about the
actual problem, but the error message I get boils down to this:

[0]PETSC ERROR: - Error Message
--

[0]PETSC ERROR: Invalid argument

[0]PETSC ERROR: Scalar value must be same on all processes, argument # 2

[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for
trouble shooting.

[0]PETSC ERROR: Petsc Release Version 3.6.3, Dec, 03, 2015

[0]PETSC ERROR: ./two_fluid_2d on a linux named bazantserver1 by mohammad
Wed Oct  5 21:14:47 2016

[0]PETSC ERROR: Configure options PETSC_ARCH=linux --prefix=/usr/local
--with-clanguage=cxx --with-c-support --with-shared-libraries
--download-hypre --download-metis --download-parmetis --download-ml
--download-superlu_dist COPTFLAGS=" -O3 -march=native" CXXOPTFLAGS=" -O3
-march=native" FOPTFLAGS=" -O3 -march=native"

[0]PETSC ERROR: #1 VecShift() line 1480 in
/tmp/petsc-3.6.3/src/vec/vec/utils/vinv.c

[0]PETSC ERROR: #2 MatNullSpaceRemove() line 348 in
/tmp/petsc-3.6.3/src/mat/interface/matnull.c

[0]PETSC ERROR: #3 KSP_RemoveNullSpace() line 207 in
/tmp/petsc-3.6.3/include/petsc/private/kspimpl.h

[0]PETSC ERROR: #4 KSP_PCApply() line 243 in
/tmp/petsc-3.6.3/include/petsc/private/kspimpl.h

[0]PETSC ERROR: #5 KSPInitialResidual() line 63 in
/tmp/petsc-3.6.3/src/ksp/ksp/interface/itres.c

[0]PETSC ERROR: #6 KSPSolve_BCGS() line 50 in
/tmp/petsc-3.6.3/src/ksp/ksp/impls/bcgs/bcgs.c

[0]PETSC ERROR: #7 KSPSolve() line 604 in
/tmp/petsc-3.6.3/src/ksp/ksp/interface/itfunc.c

I understand this is a somewhat vague question, but any idea what could cause
this sort of problem? This was on 2 processors. The same code runs fine on
a single processor. Also the solution seems to converge fine on previous
iterations, e.g. this is the convergence info from the last iteration
before the code breaks:

  0 KSP preconditioned resid norm 6.814085878146e+01 true resid norm
2.885308600701e+00 ||r(i)||/||b|| 1.e+00

  1 KSP preconditioned resid norm 3.067319980814e-01 true resid norm
8.480307326867e-02 ||r(i)||/||b|| 2.939133555699e-02

  2 KSP preconditioned resid norm 1.526405979843e-03 true resid norm
1.125228519827e-03 ||r(i)||/||b|| 3.899855008762e-04

  3 KSP preconditioned resid norm 2.199423175998e-05 true resid norm
4.232832916628e-05 ||r(i)||/||b|| 1.467029528695e-05

  4 KSP preconditioned resid norm 5.382291463582e-07 true resid norm
8.438732856334e-07 ||r(i)||/||b|| 2.924724535283e-07

  5 KSP preconditioned resid norm 9.495525177398e-09 true resid norm
1.408250768598e-08 ||r(i)||/||b|| 4.880763077669e-09

  6 KSP preconditioned resid norm 9.249233376169e-11 true resid norm
2.795840275267e-10 ||r(i)||/||b|| 9.689917655907e-11

  7 KSP preconditioned resid norm 1.138293762641e-12 true resid norm
2.559058680281e-12 ||r(i)||/||b|| 8.869272006674e-13

Also, if it matters, this is using hypre as PC and bcgs as KSP.

Thanks


Re: [petsc-users] Changing DM domain from default [0,1]

2016-08-11 Thread Mohammad Mirzadeh
Have you tried DMDASetUniformCoordinates?
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMDASetUniformCoordinates.html
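
For a 2D DMDA it is a single call after the DMDA is created, roughly like
this (dm and the bounds are placeholder names):

  DMDASetUniformCoordinates(dm, xmin, xmax, ymin, ymax, 0.0, 0.0);  /* z bounds are ignored in 2D */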

On Thu, Aug 11, 2016 at 8:41 PM, Scott Dossa  wrote:

> Hi All,
>
> Basic Question:
>
> When one creates a DMDA to handle objects, it sets the domain to [0,1] by
> default. Is there a call/function to change this?
>
> All the examples seem to be over the default domain.
>
> Thank you for the help!
> Best,
> Scott Dossa
>


[petsc-users] Docker performance

2016-08-10 Thread Mohammad Mirzadeh
I have recently been hearing about Docker [1] and its potential as a
"platform-independent" environment for application development. At first it
sounded just like another VM to me, until I came across this IBM Research
paper [2] and this SO post [3] (to be fair, I still don't get the detailed
differences ... )

What's exciting about the paper is that they report quite impressive
performance metrics for Docker, e.g. for the LINPACK and STREAM tests. So I
just tried installing PETSc in an Ubuntu image for myself tonight, and I got
relatively close performance for a toy example (2D Poisson) compared to my
native development setup (OS X) (~5-10% loss, although I was not using the
same PETSc, MPI, etc.).

So I thought I'd share my excitement with other users who might find this
useful. Personally I don't see myself using Docker a whole lot; not now
anyway, mostly because after years of suffering on OS X, I think Homebrew
has reached a mature point :).

I would love to hear if anyone has more interesting stories to share! Also,
perhaps having a standard PETSc container could be beneficial, especially to
new users? Anyway, food for thought!

Cheers,
Mohammad

[1]: https://www.docker.com/what-docker
[2]:
http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf
[3]:
http://stackoverflow.com/questions/16047306/how-is-docker-different-from-a-normal-virtual-machine


Re: [petsc-users] false-positive leak report in log_view?

2016-08-03 Thread Mohammad Mirzadeh
OK, so I just ran the example under valgrind, and if I use two VecViews, it
complains about the following leak:

==66838== 24,802 (544 direct, 24,258 indirect) bytes in 1 blocks are
definitely lost in loss record 924 of 926
==66838==at 0x19EBB: malloc (in
/usr/local/Cellar/valgrind/3.11.0/lib/valgrind/vgpreload_memcheck-amd64-darwin.so)
==66838==by 0x10005E638: PetscMallocAlign (in
/usr/local/Cellar/petsc/3.7.2/real/lib/libpetsc.3.7.2.dylib)
==66838==by 0x100405F00: DMCreate_DA (in
/usr/local/Cellar/petsc/3.7.2/real/lib/libpetsc.3.7.2.dylib)
==66838==by 0x1003CFFA4: DMSetType (in
/usr/local/Cellar/petsc/3.7.2/real/lib/libpetsc.3.7.2.dylib)
==66838==by 0x100405B7F: DMDACreate (in
/usr/local/Cellar/petsc/3.7.2/real/lib/libpetsc.3.7.2.dylib)
==66838==by 0x1003F825F: DMDACreate2d (in
/usr/local/Cellar/petsc/3.7.2/real/lib/libpetsc.3.7.2.dylib)
==66838==by 0x11D89: main (main_test.cpp:7)

But I am destroying the dm ... also I don't get this when using a single
VecView. As a bonus piece of info, PETSC_VIEWER_STDOUT_WORLD is just fine, so
this looks like it is definitely VTK-related.
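
(For reference, this was a plain valgrind --leak-check=full run on the test
binary; nothing exotic in the flags.)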

On Wed, Aug 3, 2016 at 2:05 PM, Mohammad Mirzadeh <mirza...@gmail.com>
wrote:

> On Wed, Aug 3, 2016 at 10:59 AM, Matthew Knepley <knep...@gmail.com>
> wrote:
>
>> On Tue, Aug 2, 2016 at 12:40 PM, Mohammad Mirzadeh <mirza...@gmail.com>
>> wrote:
>>
>>> I often use the memory usage information in log_view as a way to check
>>> on memory leaks, and so far it has worked perfectly. However, I had long
>>> noticed a false-positive memory-leak report for Viewers, i.e. the
>>> destruction count is always one less than the creation count.
>>>
>>
>> Yes, I believe that is the Viewer being used to print this information.
>>
>
> That makes sense.
>
>>
>>
>>> Today, I noticed what seems to be a second one. If you use VecView to
>>> write the same DA to VTK, i.e. call VecView(A, vtk); twice, it also reports
>>> a memory leak for vectors, vecscatters, dm, etc. I am calling this a
>>> false positive since the code is valgrind-clean.
>>>
>>> Is this known/expected?
>>>
>>
>> The VTK viewers have to hold everything they output until they are
>> destroyed since the format does not allow immediate writing.
>> I think the VTK viewer is not destroyed at the time of this output. Can
>> you make a small example that does this?
>>
>
> Here's a small example that illustrates the issues
>
> #include <petsc.h>
>
> int main(int argc, char *argv[]) {
>   PetscInitialize(&argc, &argv, NULL, NULL);
>
>   DM dm;
>   DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX,
>                -10, -10, PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL,
>                &dm);
>
> //  DMDASetUniformCoordinates(dm, -1, 1, -1, 1, 0, 0);
>
>   Vec sol;
>   DMCreateGlobalVector(dm, &sol);
>   VecSet(sol, 0);
>
>   PetscViewer vtk;
>   PetscViewerVTKOpen(PETSC_COMM_WORLD, "test.vts", FILE_MODE_WRITE, &vtk);
>   VecView(sol, vtk);
> //  VecView(sol, vtk);
>   PetscViewerDestroy(&vtk);
>
>   DMDestroy(&dm);
>   VecDestroy(&sol);
>
>   PetscFinalize();
>   return 0;
> }
>
>
> If you uncomment the second VecView you get reports for leaks in
> VecScatter and dm. If you also uncomment the DMDASetUniformCoordinates and
> use both VecViews, you also get a leak report for Vecs ... it's quite
> bizarre ...
>
>
>> I have switched to HDF5 and XDMF due to the limitations of VTK format.
>>
>>
> I had used XDMF + raw binary in the past and was satisfied with the
> result. Do you write a single XDMF as a "post-processing" step when the
> simulation is finished? If I remember correctly, ParaView could not open xmf
> files as a time-series.
>
>>   Thanks,
>>
>>  Matt
>>
>>
>>> Here's the relevant bit from log_view:
>>>
>>> --- Event Stage 0: Main Stage
>>>
>>>   Vector 8  7   250992 0.
>>>   Vector Scatter 2  00 0.
>>> Distributed Mesh 2  00 0.
>>> Star Forest Bipartite Graph 4  00 0.
>>>  Discrete System 2  00 0.
>>>Index Set 4  483136 0.
>>>IS L to G Mapping 2  00 0.
>>>   Viewer 2  1  784 0.
>>>
>>> 
>>>
>>>
>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>
>


Re: [petsc-users] false-positive leak report in log_view?

2016-08-03 Thread Mohammad Mirzadeh
On Wed, Aug 3, 2016 at 10:59 AM, Matthew Knepley <knep...@gmail.com> wrote:

> On Tue, Aug 2, 2016 at 12:40 PM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
>
>> I often use the memory usage information in log_view as a way to check on
>> memory leaks, and so far it has worked perfectly. However, I had long noticed
>> a false-positive memory-leak report for Viewers, i.e. the destruction count
>> is always one less than the creation count.
>>
>
> Yes, I believe that is the Viewer being used to print this information.
>

That makes sense.

>
>
>> Today, I noticed what seems to be a second one. If you use VecView to
>> write the same DA to VTK, i.e. call VecView(A, vtk); twice, it also reports
>> a memory leak for vectors, vecscatters, dm, etc. I am calling this a
>> false positive since the code is valgrind-clean.
>>
>> Is this known/expected?
>>
>
> The VTK viewers have to hold everything they output until they are
> destroyed since the format does not allow immediate writing.
> I think the VTK viewer is not destroyed at the time of this output. Can
> you make a small example that does this?
>

Here's a small example that illustrates the issues

#include <petsc.h>

int main(int argc, char *argv[]) {
  PetscInitialize(&argc, &argv, NULL, NULL);

  DM dm;
  DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_BOX,
               -10, -10, PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL,
               &dm);

//  DMDASetUniformCoordinates(dm, -1, 1, -1, 1, 0, 0);

  Vec sol;
  DMCreateGlobalVector(dm, &sol);
  VecSet(sol, 0);

  PetscViewer vtk;
  PetscViewerVTKOpen(PETSC_COMM_WORLD, "test.vts", FILE_MODE_WRITE, &vtk);
  VecView(sol, vtk);
//  VecView(sol, vtk);
  PetscViewerDestroy(&vtk);

  DMDestroy(&dm);
  VecDestroy(&sol);

  PetscFinalize();
  return 0;
}


If you uncomment the second VecView you get reports for leaks in VecScatter
and dm. If you also uncomment the DMDASetUniformCoordinates and use both
VecViews, you also get a leak report for Vecs ... it's quite bizarre ...


> I have switched to HDF5 and XDMF due to the limitations of VTK format.
>
>
I had used XDMF + raw binary in the past and was satisfied with the result.
Do you write a single XDMF as a "post-processing" step when the simulation
is finished? If I remember correctly, ParaView could not open xmf files as a
time-series.

>   Thanks,
>
>  Matt
>
>
>> Here's the relevant bit from log_view:
>>
>> --- Event Stage 0: Main Stage
>>
>>   Vector 8  7   250992 0.
>>   Vector Scatter 2  00 0.
>> Distributed Mesh 2  00 0.
>> Star Forest Bipartite Graph 4  00 0.
>>  Discrete System 2  00 0.
>>Index Set 4  483136 0.
>>IS L to G Mapping 2  00 0.
>>   Viewer 2  1  784 0.
>>
>> 
>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>


[petsc-users] false-positive leak report in log_view?

2016-08-02 Thread Mohammad Mirzadeh
I often use the memory usage information in log_view as a way to check on
memory leaks, and so far it has worked perfectly. However, I had long noticed
a false-positive memory-leak report for Viewers, i.e. the destruction count
is always one less than the creation count.

Today, I noticed what seems to be a second one. If you use VecView to write
the same DA to VTK, i.e. call VecView(A, vtk); twice, it also reports a
memory leak for vectors, vecscatters, dm, etc. I am calling this a
false positive since the code is valgrind-clean.

Is this known/expected?

Here's the relevant bit from log_view:

--- Event Stage 0: Main Stage

  Vector 8  7   250992 0.
  Vector Scatter 2  00 0.
Distributed Mesh 2  00 0.
Star Forest Bipartite Graph 4  00 0.
 Discrete System 2  00 0.
   Index Set 4  483136 0.
   IS L to G Mapping 2  00 0.
  Viewer 2  1  784 0.
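
(For anyone reading the table above: if I read the -log_view output correctly,
the columns are creations, destructions, memory, and descendants' memory, so
the "leak" shows up as the destruction count being one less than the creation
count.)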



Re: [petsc-users] libtools mismatch on osx

2016-03-19 Thread Mohammad Mirzadeh
Mark,

I remember having similar libtool-related issues before when configuring
p4est on OS X. The way I ended up fixing it was to use libtool from Homebrew.
Just a note that Homebrew installs it as 'glibtool' and 'glibtoolize' to avoid
conflicts with Apple's version.

As a side note, p4est itself (along with PETSc and many other useful external
packages) is also available through the homebrew/science tap as a binary.

Hope this helps.

Mohammad

On Thu, Mar 17, 2016 at 9:56 AM, Mark Adams  wrote:

> I manually installed libtools yesterday and all seemed well but I cloned a
> new PETSc just now and now get this error.
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 3:03 PM, Boyce Griffith <griff...@cims.nyu.edu>
wrote:

>
> On Mar 1, 2016, at 2:41 PM, Mohammad Mirzadeh <mirza...@gmail.com> wrote:
>
>
>
> On Tue, Mar 1, 2016 at 2:07 PM, Boyce Griffith <griff...@cims.nyu.edu>
> wrote:
>
>>
>> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh <mirza...@gmail.com>
>> wrote:
>>
>> Nice discussion.
>>
>>
>> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith <griff...@cims.nyu.edu>
>> wrote:
>>
>>>
>>> On Mar 1, 2016, at 9:59 AM, Mark Adams <mfad...@lbl.gov> wrote:
>>>
>>>
>>>
>>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith <griff...@cims.nyu.edu>
>>> wrote:
>>>
>>>>
>>>> On Feb 29, 2016, at 5:36 PM, Mark Adams <mfad...@lbl.gov> wrote:
>>>>
>>>>
>>>>>> GAMG is used for AMR problems like this a lot in BISICLES.
>>>>>>
>>>>>
>>>>> Thanks for the reference. However, a quick look at their paper
>>>>> suggests they are using a finite volume discretization which should be
>>>>> symmetric and avoid all the shenanigans I'm going through!
>>>>>
>>>>
>>>> No, they are not symmetric.  FV is even worse than vertex centered
>>>> methods.  The BCs and the C-F interfaces add non-symmetry.
>>>>
>>>>
>>>> If you use a different discretization, it is possible to make the c-f
>>>> interface discretization symmetric --- but symmetry appears to come at a
>>>> cost of the reduction in the formal order of accuracy in the flux along the
>>>> c-f interface. I can probably dig up some code that would make it easy to
>>>> compare.
>>>>
>>>
>>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
>>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
>>> perhaps this IS symmetric? Toby might know.
>>>
>>>
>>> If you are talking about solving Poisson on a composite grid, then
>>> refluxing and summing up fluxes are probably the same procedure.
>>>
>>
>> I am not familiar with the terminology used here. What does the refluxing
>> mean?
>>
>>
>>>
>>> Users of these kinds of discretizations usually want to use the
>>> conservative divergence at coarse-fine interfaces, and so the main question
>>> is how to set up the viscous/diffusive flux stencil at coarse-fine
>>> interfaces (or, equivalently, the stencil for evaluating ghost cell values
>>> at coarse-fine interfaces). It is possible to make the overall
>>> discretization symmetric if you use a particular stencil for the flux
>>> computation. I think this paper (
>>> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>>> is one place to look. (This stuff is related to "mimetic finite difference"
>>> discretizations of Poisson.) This coarse-fine interface discretization
>>> winds up being symmetric (although possibly only w.r.t. a weighted inner
>>> product --- I can't remember the details), but the fluxes are only
>>> first-order accurate at coarse-fine interfaces.
>>>
>>>
>> Right. I think if the discretization is conservative, i.e. discretizing
>> div of grad, and is compact, i.e. only involves neighboring cells sharing a
>> common face, then it is possible to construct symmetric discretization. An
>> example, that I have used before in other contexts, is described here:
>> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>>
>> An interesting observation is although the fluxes are only first order
>> accurate, the final solution to the linear system exhibits super
>> convergence, i.e. second-order accurate, even in L_inf. Similar behavior is
>> observed with non-conservative, node-based finite difference
>> discretizations.
>>
>>
>> I don't know about that --- check out Table 1 in the paper you cite,
>> which seems to indicate first-order convergence in all norms.
>>
>
> > Sorry, my bad. That was the original work, which was later extended in
> > doi:10.1016/j.compfluid.2005.01.006
> > <http://dx.doi.org/10.1016/j.compfluid.2005.01.006> to second order (cf.
> > section 3.3) by using flux weighting in the transverse direction.
>
>
> I don't follow the argument about why it is a bad thing for the fine
> fluxes to have different values than the overlying coarse flux, bu

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 1:15 PM, Jed Brown <j...@jedbrown.org> wrote:

> Mohammad Mirzadeh <mirza...@gmail.com> writes:
>
> > I am not familiar with the terminology used here. What does the refluxing
> > mean?
>
> The Chombo/BoxLib family of methods evaluate fluxes between coarse grid
> cells overlaying refined grids, then later visit the fine grids and
> reevaluate those fluxes.  The correction needs to be propagated back to
> the adjoining coarse grid cell to maintain conservation.  It's an
> implementation detail that they call refluxing.
>

Thanks for the clarification.


>
> > Right. I think if the discretization is conservative, i.e. discretizing
> div
> > of grad, and is compact, i.e. only involves neighboring cells sharing a
> > common face, then it is possible to construct symmetric discretization.
> An
> > example, that I have used before in other contexts, is described here:
> > http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>
> It's unfortunate that this paper repeats some unfounded multigrid
> slander and then basically claims to have uniform convergence using
> incomplete Cholesky with CG.  In reality, incomplete Cholesky is
> asymptotically no better than Jacobi.
>
> > An interesting observation is although the fluxes are only first order
> > accurate, the final solution to the linear system exhibits super
> > convergence, i.e. second-order accurate, even in L_inf.
>
> Perhaps for aligned coefficients; definitely not for unaligned
> coefficients.
>

Could you elaborate on what you mean by aligned/unaligned coefficients? Do you
mean an anisotropic diffusion coefficient?


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
On Tue, Mar 1, 2016 at 2:07 PM, Boyce Griffith <griff...@cims.nyu.edu>
wrote:

>
> On Mar 1, 2016, at 12:06 PM, Mohammad Mirzadeh <mirza...@gmail.com> wrote:
>
> Nice discussion.
>
>
> On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith <griff...@cims.nyu.edu>
> wrote:
>
>>
>> On Mar 1, 2016, at 9:59 AM, Mark Adams <mfad...@lbl.gov> wrote:
>>
>>
>>
>> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith <griff...@cims.nyu.edu>
>> wrote:
>>
>>>
>>> On Feb 29, 2016, at 5:36 PM, Mark Adams <mfad...@lbl.gov> wrote:
>>>
>>>
>>>>> GAMG is used for AMR problems like this a lot in BISICLES.
>>>>>
>>>>
>>>> Thanks for the reference. However, a quick look at their paper suggests
>>>> they are using a finite volume discretization which should be symmetric and
>>>> avoid all the shenanigans I'm going through!
>>>>
>>>
>>> No, they are not symmetric.  FV is even worse than vertex centered
>>> methods.  The BCs and the C-F interfaces add non-symmetry.
>>>
>>>
>>> If you use a different discretization, it is possible to make the c-f
>>> interface discretization symmetric --- but symmetry appears to come at a
>>> cost of the reduction in the formal order of accuracy in the flux along the
>>> c-f interface. I can probably dig up some code that would make it easy to
>>> compare.
>>>
>>
>> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
>> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
>> perhaps this IS symmetric? Toby might know.
>>
>>
>> If you are talking about solving Poisson on a composite grid, then
>> refluxing and summing up fluxes are probably the same procedure.
>>
>
> I am not familiar with the terminology used here. What does the refluxing
> mean?
>
>
>>
>> Users of these kinds of discretizations usually want to use the
>> conservative divergence at coarse-fine interfaces, and so the main question
>> is how to set up the viscous/diffusive flux stencil at coarse-fine
>> interfaces (or, equivalently, the stencil for evaluating ghost cell values
>> at coarse-fine interfaces). It is possible to make the overall
>> discretization symmetric if you use a particular stencil for the flux
>> computation. I think this paper (
>> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
>> is one place to look. (This stuff is related to "mimetic finite difference"
>> discretizations of Poisson.) This coarse-fine interface discretization
>> winds up being symmetric (although possibly only w.r.t. a weighted inner
>> product --- I can't remember the details), but the fluxes are only
>> first-order accurate at coarse-fine interfaces.
>>
>>
> Right. I think if the discretization is conservative, i.e. discretizing
> div of grad, and is compact, i.e. only involves neighboring cells sharing a
> common face, then it is possible to construct a symmetric discretization. An
> example, that I have used before in other contexts, is described here:
> http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf
>
> An interesting observation is that although the fluxes are only first order
> accurate, the final solution to the linear system exhibits super
> convergence, i.e. second-order accurate, even in L_inf. Similar behavior is
> observed with non-conservative, node-based finite difference
> discretizations.
>
>
> I don't know about that --- check out Table 1 in the paper you cite, which
> seems to indicate first-order convergence in all norms.
>

Sorry, my bad. That was the original work, which was later extended in
doi:10.1016/j.compfluid.2005.01.006
<http://dx.doi.org/10.1016/j.compfluid.2005.01.006> to second order (cf.
section 3.3) by using flux weighting in the transverse direction.


>
> The symmetric discretization in the Ewing paper is only slightly more
> complicated, but will give full 2nd-order accuracy in L-1 (and maybe also
> L-2 and L-infinity). One way to think about it is that you are using simple
> linear interpolation at coarse-fine interfaces (3-point interpolation in
> 2D, 4-point interpolation in 3D) using a stencil that is symmetric with
> respect to the center of the coarse grid cell.
>
>
I'll look into that paper. One can never get enough ideas for C-F
treatments in AMR applications :).


> A (discrete) Green's functions argument explains why one gets higher-order
> convergence despite localized reductions in accuracy along the coarse-fine
> interface --- it has to do with the fact that errors from individual grid
> locations do not have that large of an effect on the solution, and these
> c-f interface errors are concentrated along on a lower dimensional surface
> in the domain.
>

This intuitively makes sense, and in fact when you plot the error, you do
see spikes at the C-F interfaces. Do you know of a resource that does a
rigorous analysis of the effect of the C-F treatment on the solution error?


>
> -- Boyce
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-03-01 Thread Mohammad Mirzadeh
Nice discussion.


On Tue, Mar 1, 2016 at 10:16 AM, Boyce Griffith 
wrote:

>
> On Mar 1, 2016, at 9:59 AM, Mark Adams  wrote:
>
>
>
> On Mon, Feb 29, 2016 at 5:42 PM, Boyce Griffith 
> wrote:
>
>>
>> On Feb 29, 2016, at 5:36 PM, Mark Adams  wrote:
>>
>>
 GAMG is used for AMR problems like this a lot in BISICLES.

>>>
>>> Thanks for the reference. However, a quick look at their paper suggests
>>> they are using a finite volume discretization which should be symmetric and
>>> avoid all the shenanigans I'm going through!
>>>
>>
>> No, they are not symmetric.  FV is even worse than vertex centered
>> methods.  The BCs and the C-F interfaces add non-symmetry.
>>
>>
>> If you use a different discretization, it is possible to make the c-f
>> interface discretization symmetric --- but symmetry appears to come at a
>> cost of the reduction in the formal order of accuracy in the flux along the
>> c-f interface. I can probably dig up some code that would make it easy to
>> compare.
>>
>
> I don't know.  Chombo/Boxlib have a stencil for C-F and do F-C with
> refluxing, which I do not linearize.  PETSc sums fluxes at faces directly,
> perhaps this IS symmetric? Toby might know.
>
>
> If you are talking about solving Poisson on a composite grid, then
> refluxing and summing up fluxes are probably the same procedure.
>

I am not familiar with the terminology used here. What does refluxing mean?


>
> Users of these kinds of discretizations usually want to use the
> conservative divergence at coarse-fine interfaces, and so the main question
> is how to set up the viscous/diffusive flux stencil at coarse-fine
> interfaces (or, equivalently, the stencil for evaluating ghost cell values
> at coarse-fine interfaces). It is possible to make the overall
> discretization symmetric if you use a particular stencil for the flux
> computation. I think this paper (
> http://www.ams.org/journals/mcom/1991-56-194/S0025-5718-1991-1066831-5/S0025-5718-1991-1066831-5.pdf)
> is one place to look. (This stuff is related to "mimetic finite difference"
> discretizations of Poisson.) This coarse-fine interface discretization
> winds up being symmetric (although possibly only w.r.t. a weighted inner
> product --- I can't remember the details), but the fluxes are only
> first-order accurate at coarse-fine interfaces.
>
>
Right. I think if the discretization is conservative, i.e. it discretizes div
of grad, and is compact, i.e. it only involves neighboring cells sharing a
common face, then it is possible to construct a symmetric discretization. An
example that I have used before in other contexts is described here:
http://physbam.stanford.edu/~fedkiw/papers/stanford2004-02.pdf

An interesting observation is that although the fluxes are only first-order
accurate, the final solution to the linear system exhibits super-convergence,
i.e. it is second-order accurate, even in L_inf. Similar behavior is
observed with non-conservative, node-based finite difference
discretizations.

-- Boyce
>


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-26 Thread Mohammad Mirzadeh
Mark,

On Fri, Feb 26, 2016 at 8:12 AM, Mark Adams  wrote:

>
>>4-4) Along the same lines, I tried a couple of other PCs such as
>> {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
>> as the KSP. However, with gmres, almost all of them converge with the
>> exception of gamg.
>>
>
> Note, I'm not sure why you need the null space of A^T, you want the null
> space of A.
>
>
So the idea was to provide the nullspace of A^T to make sure the true residual
also converges to zero, by projecting the RHS onto the range of A. However, it
looks like GMRES (and sometimes BiCGSTAB) converges in the least-squares
sense, for which you only need the nullspace of A and not A^T.
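
As a rough sketch of what I had in mind (n is a hypothetical normalized Vec
approximating the nullspace of A^T, and b is the RHS):

  PetscScalar alpha;
  VecDot(n, b, &alpha);    /* component of b along the left null vector */
  VecAXPY(b, -alpha, n);   /* b <- b - (n,b) n, i.e. project b onto range(A) */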

> And for singular systems like yours you need to use a pseudo inverse of the
> coarse grid because it is singular -- if you represent the null space
> exactly.
>
> GAMG is used for AMR problems like this a lot in BISICLES.
>

Thanks for the reference. However, a quick look at their paper suggests
they are using a finite volume discretization which should be symmetric and
avoid all the shenanigans I'm going through! I think it would actually be a
good idea for me to swap my solver with a conservative one and see if it
makes things better.


> You need to use an 'svd' coarse grid solver, or an appropriate iterative
> solver. LU is the default.
>
>
I see. How can I change the GAMG coarse grid solver? Is there an analogue
of "-pc_hypre_boomeramg_relax_type_coarse"?

Mark
>

Thanks,
Mohammad


Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-25 Thread Mohammad Mirzadeh
Thanks a lot Barry. This was very helpful :)

On Thu, Feb 25, 2016 at 6:05 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
> > On Feb 25, 2016, at 4:18 PM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
> >
> > Barry,
> >
> > Thanks a lot for the detailed discussion. Things make much more sense
> now, especially since I was confused about why the manual says to call both
> 'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have a couple more
> questions and some observations I have made since yesterday.
> >
> > 1) Is there a systematic way to tell KSP to stop when it stagnates? I am
> _not_ using SNES.
>
>   No, I added the issue
> https://bitbucket.org/petsc/petsc/issues/122/ksp-convergence-test-for-inconsistent
> writing the code for a new test is pretty simple, but you need to decide
> mathematically how you are going to detect "stagnation".
>
> >
> > 2) Once KSP converges to the least-squares solution, the residual must be
> in the nullspace of A^T because otherwise it would have been reduced by the
> solver. So is it (at least theoretically) possible to use the residual
> vector as an approximate basis for n(A^T)? In general this wouldn't be
> enough, but I'm thinking since the nullspace is one-dimensional, maybe I
> could use the residual itself to manually project the solution onto the
> range of A after calling KSPSolve?
>
>   I don't see why this wouldn't work. Just run one initial solve till
> stagnation and then use the resulting residual as the null space for A^T
> for future solves (with the same matrix, of course). It could be
> interesting to plot the residual to see what it looks like and it that
> makes sense physically
>
> >
> > 3) Are preconditioners applied to the left by default? If not, which ones
> are and which ones are not?
>
>   It actually depends on the KSP, some algorithms only support right
> preconditioning, some implementations only support one or the other
> (depending on who wrote it) and some support both.  In PETSc CG, GMRES, and
> BiCGstab use left by default, both GMRES and BiCGstab  also support right.
>
> >
> > 4) So then if I provide the nullspace of A, KSP residual should
> converge, correct? I have made a few observations:
> >
> >4-1) When I provide the nullspace of A through MatSetNullSpace, I
> (generally) do see the residual (preconditioned) become very small (~1e-12
> or so) but then it sometimes stagnates (say around 1e-10). Is this normal?
>
>   There is only so much convergence one can expect for linear solvers;
> how far one can drive down the residual depends at least in part on the
> conditioning of the matrix. The higher the conditioning the less you can
> get in tolerance.
>
>
> >
> >4-2) Another observation is that the true residual stagnates way
> earlier which I assume is an indication that the RHS is in fact not in the
> range of A.
>
>You can hope this is the case.
>
>Of course one cannot really know if the residual is stagnating due to
> an inconsistent right hand side or for some other reason. But if it
> stagnates at 10^-4 it is probably due to inconsistent right hand side if it
> stagnates at 10^-12 the right hand side is probably consistent. If it
> stagnates at 10^-8 then it could be due to inconsistent rhs and or some
> other reason.
> >
> >4-3) Also, I've seen that the choice of KSP and PC have considerable
> effects on the solver. For instance, by default I use hypre+bcgs and I have
> noticed I need to change coarse-grid relaxation from Gaussian Elimination
> to symmetric-sor/Jacobi otherwise hypre has issues. I assume this is
> because the AMG preconditioner is also singular on the coarse level?
>
>Yes this is likely the reason.
> >
> >4-4) Along the same lines, I tried a couple of other PCs such as
> {jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
> as the KSP. However, with gmres, almost all of them converge
>
>Sure this is expected
>
> > with the exception of gamg.
> >
> >4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor,
> ilu}, the true residual seems to be generally 2-3 orders of magnitude
> smaller (1e-6 vs 1e-3). I suppose this is just because gmres is more robust?
>
>   Yes, with a single system I would recommend sticking with GMRES and
> avoiding Bcgs.
> >
> >4-6) With hypre, the true residual is always larger (~1e-3) than
> other PCs no matter if I use bcgs or gmres.
>
>ok.
>
> >
> > Sorry for the long discussion but this has turned out to be quite
> educational for me!
> >
> > Thanks,
> > Mohammad
> >
> > On Wed, Feb 

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-25 Thread Mohammad Mirzadeh
Barry,

Thanks a lot for the detailed discussion. Things make much more sense now,
especially since I was confused about why the manual says to call both
'MatSetNullSpace' and 'MatSetTransposeNullSpace'. I have a couple more
questions and some observations I have made since yesterday.

1) Is there a systematic way to tell KSP to stop when it stagnates? I am
_not_ using SNES.

2) Once KSP converges to the least-squares solution, the residual must be in
the nullspace of A^T because otherwise it would have been reduced by the
solver. So is it (at least theoretically) possible to use the residual
vector as an approximate basis for n(A^T)? In general this wouldn't be
enough, but I'm thinking since the nullspace is one-dimensional, maybe I
could use the residual itself to manually project the solution onto the range
of A after calling KSPSolve?

3) Are preconditioners applied to the left by default? If not, which ones are
and which ones are not? (See also my note after item 4-6 below.)

4) So then if I provide the nullspace of A, the KSP residual should converge,
correct? I have made a few observations:

   4-1) When I provide the nullspace of A through MatSetNullSpace, I
(generally) do see the residual (preconditioned) become very small (~1e-12
or so) but then it sometimes stagnates (say around 1e-10). Is this normal?

   4-2) Another observation is that the true residual stagnates way earlier,
which I assume is an indication that the RHS is in fact not in the range of
A.

   4-3) Also, I've seen that the choice of KSP and PC has considerable
effects on the solver. For instance, by default I use hypre+bcgs, and I have
noticed I need to change the coarse-grid relaxation from Gaussian elimination
to symmetric-SOR/Jacobi, otherwise hypre has issues. I assume this is
because the AMG preconditioner is also singular on the coarse level?

   4-4) Along the same lines, I tried a couple of other PCs such as
{jacobi, sor, gamg, ilu} and none of them were able to converge with bcgs
as the KSP. However, with gmres, almost all of them converge with the
exception of gamg.

   4-5) If I use gmres instead of bcgs, and for any of {jacobi, sor, ilu},
the true residual seems to be generally 2-3 orders of magnitude smaller
(1e-6 vs 1e-3). I suppose this is just because gmres is more robust?

   4-6) With hypre, the true residual is always larger (~1e-3) than other
PCs no matter if I use bcgs or gmres.
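
(A note on question 3 above: as far as I can tell, the active side is
reported by -ksp_view, and where a given KSP supports it, it can be switched
with -ksp_pc_side left|right or KSPSetPCSide(); please correct me if that is
off.)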

Sorry for the long discussion but this has turned out to be quite
educational for me!

Thanks,
Mohammad

On Wed, Feb 24, 2016 at 2:21 PM, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
> > On Feb 24, 2016, at 9:07 AM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
> >
> > Barry,
> >
> > On Wednesday, February 24, 2016, Barry Smith <bsm...@mcs.anl.gov> wrote:
> >
> > > On Feb 24, 2016, at 12:07 AM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
> >
> >
> > At the PDE level the compatibility condition is satisfied. However I
> suspect that at the discrete level the rhs is not not exactly in the range.
> The reason for my suspicion is that I know for a fact that my
> discretization is not conservative at the hanging nodes.
>
>Could be.
>
> >
> > Thanks for the suggestion. I'll give it a try. However, does GMRES
> fundamentally behave differently than BiCGSTAB for these systems? I have
> seen people saying that it can keep the solution in the range if rhs is
> already in the range but I thought any KSP would do the same?
>
>
> Here is the deal.  There are two issues here
>
> 1)  Making sure that b is in the range of A.  If b is not in the range
> of A then it is obviously impossible to find an x such that A x = b. If we
> divide b into a part in the range of A called br and the rest call it bp
> then b = br + bp   and one can solve Ax = br   and the left over residual
> is bp. Normally if you run GMRES with an inconsistent right hand side (that
> is bp != 0) it will solve Ax = br automatically and thus give you a "least
> squares" answer which is likely what you want. These means in some sense
> you don't really need to worry about making sure that b is in the range of
> A. But the residuals of GMRES will stagnate, which makes sense because it
> cannot get rid of the bp part. In the least squares sense however GMRES has
> converged. If you provide MatSetTransposeNullSpace() then KSP automatically
> removes this space from b when it starts the Krylov method, this is nice
> because the GMRES residuals will go to zero.
>
> 2) Making sure that huge multiples of the null space of A do not get into
> the computed solution.
>
> With left preconditioning Krylov methods construct the solution in the
> space {Bb, BABb, BABABb, ..} if the range of B contains entries in the null
> space of A then the Krylov space will contain vectors in the null space of
> A. What can happen is that

Re: [petsc-users] Neumann BC with non-symmetric matrix

2016-02-23 Thread Mohammad Mirzadeh
Barry,
On Wednesday, February 24, 2016, Barry Smith <bsm...@mcs.anl.gov> wrote:

>
> > On Feb 23, 2016, at 11:35 PM, Mohammad Mirzadeh <mirza...@gmail.com>
> wrote:
> >
> > Dear all,
> >
> > I am dealing with a situation I was hoping to get some suggestions here.
> Suppose after discretizing a poisson equation with purely neumann (or
> periodic) bc I end up with a matrix that is *almost* symmetric, i.e. it is
> symmetric for almost all grid points with the exception of a few points.
>
>   How come it is not purely symmetric? The usual finite elements with pure
> Neumann or periodic bc will give a completely symmetric matrix.
>
>   Barry
>
>
So this is a finite difference discretization on adaptive Cartesian grids.
It turns out that the discretization is non-symmetric at the coarse-fine
interface. It's actually not because of the BC itself.

> >
> > The correct way of handling this problem is by specifying the nullspace
> to MatSetNullSpace. However, since the matrix is non-symmetric in general I
> would need to pass the nullspace of A^T. Now it turns out that if A is
> *sufficiently close to being symmetric*, I can get away with the constant
> vector, which is the nullspace of A and not A^T, but obviously this does
> not always work. Sometimes the KSP converges and in other situations the
> residual stagnates which is to be expected.
> >
> > Now, here are my questions (sorry if they are too many!):
> >
> > 1) Is there any efficient way of calculating nullspace of A^T in this
> case? Is SVD the only way?
> >
> > 2) I have tried fixing the solution at an arbitrary point, and while it
> generally works, for some problems I get numerical artifacts, e.g. slight
> asymmetry in the solution and/or increased error close to the point where I
> fix the solution. Is this, more or less, expected as a known artifact?
> >
> > 3) An alternative to 2 is to enforce some global constraint on the
> solution, e.g. to require that the average be zero. My question here is
> two-fold:
>
>   Requiring the average be zero is exactly the same as providing a null
> space of the constant function. Saying the average is zero is the same as
> saying the solution is orthogonal to the constant function. I don't see any
> reason to introduce the Lagrange multiplier and all its complications
> inside of just providing the constant null space.


Is this also true at the discrete level when the matrix is non-symmetric? I
have always viewed this as just a constraint that could really be anything.


>
> >
> > 3-1) Is this generally any better than solution 2, in terms of not
> messing too much with the condition number of the matrix?
> >
> > 3-2) I don't quite know how to implement this using PETSc. Generally
> speaking I'd like to solve
> >
> > | A    U |   | X |   | B |
> > |        | * |   | = |   |
> > | U^T  0 |   | s |   | 0 |
> >
> >
> > where U is a constant vector (of ones) and s is effectively a Lagrange
> multiplier. I suspect I need to use MatCreateSchurComplement and pass that
> to the KSP? Or do I need to create my own matrix type from scratch through
> MatCreateShell?
> >
> > Any help is appreciated!
> >
> > Thanks,
> > Mohammad
> >
> >
>
>

-- 
Sent from Gmail Mobile


[petsc-users] Neumann BC with non-symmetric matrix

2016-02-23 Thread Mohammad Mirzadeh
Dear all,

I am dealing with a situation for which I was hoping to get some suggestions
here. Suppose that after discretizing a Poisson equation with purely Neumann
(or periodic) BCs I end up with a matrix that is *almost* symmetric, i.e. it
is symmetric for almost all grid points, with the exception of a few points.

The correct way of handling this problem is by specifying the nullspace via
MatSetNullSpace. However, since the matrix is non-symmetric, in general I
would need to pass the nullspace of A^T. Now, it turns out that if A is
*sufficiently close to being symmetric*, I can get away with the constant
vector, which is the nullspace of A and not A^T, but obviously this does
not always work. Sometimes the KSP converges, and in other situations the
residual stagnates, which is to be expected.
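
For reference, "getting away with the constant vector" in the nearly symmetric
case amounts to something like the following minimal sketch (A is the
assembled matrix); it relies on b being in range(A) exactly when it is
orthogonal to null(A^T), which for a symmetric A is just the constant vector:

  MatNullSpace nsp;
  MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);  /* null space = span{1} */
  MatSetNullSpace(A, nsp);
  MatNullSpaceDestroy(&nsp);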

Now, here are my questions (sorry if they are too many!):

1) Is there any efficient way of calculating the nullspace of A^T in this
case? Is SVD the only way?

2) I have tried fixing the solution at an arbitrary point, and while it
generally works, for some problems I get numerical artifacts, e.g. slight
asymmetry in the solution and/or increased error close to the point where I
fix the solution. Is this, more or less, expected as a known artifact?

3) An alternative to 2 is to enforce some global constraint on the
solution, e.g. to require that the average be zero. My question here is
two-fold:

3-1) Is this generally any better than solution 2, in terms of not messing
too much with the condition number of the matrix?

3-2) I don't quite know how to implement this using PETSc. Generally
speaking I'd like to solve

| A    U |   | X |   | B |
|        | * |   | = |   |
| U^T  0 |   | s |   | 0 |


where U is a constant vector (of ones) and s is effectively a Lagrange
multiplier. I suspect I need to use MatCreateSchurComplement and pass that
to the KSP? Or do I need to create my own matrix type from scratch through
MatCreateShell?

Any help is appreciated!

Thanks,
Mohammad


Re: [petsc-users] soname seems to be absent in OS-X

2015-10-27 Thread Mohammad Mirzadeh
Denis,

I use Homebrew to manage PETSc and all related dependencies on OS X, which
uses .dylib for shared libs instead of .so. However, I have not had any
issues linking against them in the usual way, by passing -L/path/to/lib
-lwhatever (you need to pass -Wl,-rpath,/path/to/lib as well if the lib
directory is not in the run-time search path).
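
For example, with the Homebrew layout shown in your otool output below, a
link line along these lines should work (main.o / main are placeholder names;
adjust the path to your install):

  cc main.o -o main -L/usr/local/opt/petsc/real/lib -Wl,-rpath,/usr/local/opt/petsc/real/lib -lpetsc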

Hope this helps.


On Tue, Oct 27, 2015 at 12:28 PM, Denis Davydov  wrote:

> Dear developers,
>
> It seems that the compiled PETSc library does not have a soname
> (-Wl,-install_name=xyz.dylib in OS-X).
> I compile PETSc/SLEPc using Homebrew both on OS-X and Linux and judging
> from ldd/otool there is indeed a difference:
>
> Linux (soname is there):
>
> $ ldd .linuxbrew_openblas/opt/slepc/lib/libslepc.so | grep "petsc"
> libpetsc.so.3.6 => 
> /home/woody/iwtm/iwtm108/.linuxbrew_openblas/Cellar/petsc/3.6.2/real/lib/libpetsc.so.3.6
>  (0x7fac7822f000)
>
> OS-X (no soname):
>
> $ otool -L /usr/local/opt/slepc/lib/libslepc.dylib | grep "petsc"
> /usr/local/opt/petsc/real/lib/libpetsc.3.6.2.dylib (compatibility version 
> 3.6.0, current version 3.6.2)
>
>
> I do not see `-Wl,-soname=xyz` in linking flags and nothing like
> `-Wl,-install_name=xyz` is there on OS-X either.
> Any mac users can comment on this?
>
> p.s. as Macports web is down i can’t check what folks are doing there.
>
> Kind regards,
> Denis
>
>


[petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
Hi guys,

I have a discretization of the Poisson equation with Neumann BCs for embedded
boundary grids, in such a way that the nullspace is not the usual constant
vector. Instead, the nullspace is constant in the domain of interest and
zero elsewhere.

I compute this nullspace myself and have checked it against MATLAB by
dumping the matrix and computing the nullspace explicitly using the null
function -- they match and there is only this single vector. Then I take
this calculated vector and subtract it off the matrix and rhs.

However, I am having convergence issues. For instance, this is the output of
-ksp_monitor_true_residual for one particular run:

  0 KSP preconditioned resid norm 3.033840960250e+02 true resid norm
2.332886580745e-01 ||r(i)||/||b|| 1.e+00
  1 KSP preconditioned resid norm 1.018974811826e+01 true resid norm
1.941629896918e-02 ||r(i)||/||b|| 8.322864527335e-02
  2 KSP preconditioned resid norm 5.450493684941e-02 true resid norm
1.029339589324e-02 ||r(i)||/||b|| 4.412300185615e-02
  3 KSP preconditioned resid norm 3.944064039516e-02 true resid norm
1.030277925024e-02 ||r(i)||/||b|| 4.416322394443e-02
  4 KSP preconditioned resid norm 6.286181172600e-05 true resid norm
1.030243055045e-02 ||r(i)||/||b|| 4.416172923059e-02
  5 KSP preconditioned resid norm 4.349133658643e-06 true resid norm
1.030239080406e-02 ||r(i)||/||b|| 4.416155885630e-02
  6 KSP preconditioned resid norm 9.279429568232e-08 true resid norm
1.030239169298e-02 ||r(i)||/||b|| 4.41615626e-02
  7 KSP preconditioned resid norm 3.032522248740e-09 true resid norm
1.030239175066e-02 ||r(i)||/||b|| 4.416156291393e-02
  8 KSP preconditioned resid norm 6.533747246875e-09 true resid norm
1.030239175718e-02 ||r(i)||/||b|| 4.416156294184e-02
  9 KSP preconditioned resid norm 6.083185162500e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292220e-02
 10 KSP preconditioned resid norm 5.51031965e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
 11 KSP preconditioned resid norm 5.456758524534e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
 12 KSP preconditioned resid norm 5.456756081783e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
 13 KSP preconditioned resid norm 5.456755930952e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
 14 KSP preconditioned resid norm 5.456755930949e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
 15 KSP preconditioned resid norm 5.456755930949e-12 true resid norm
1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02


As you can see, the true residual is quite large and, moreover, it does
not reduce beyond a certain point. This is using hypre as the
preconditioner, but the situation is equally bad with several other
preconditioners (ilu, sor, jacobi, or even none). As for the solution
itself, the error shows poor to no convergence under grid refinement.
All this suggests that the linear system is not converging in my case.


Do you have any ideas/suggestions as to why this is happening and how I can avoid it?


Thanks


Re: [petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
Yes. To be precise, this is the set of functions I call:

ierr = MatNullSpaceCreate(mpicomm, PETSC_FALSE, 1, &null_space,
&A_null_space); CHKERRXX(ierr);

ierr = MatSetNullSpace(A, A_null_space); CHKERRXX(ierr);

ierr = KSPSetNullSpace(ksp, A_null_space); CHKERRXX(ierr);

ierr = MatNullSpaceRemove(A_null_space, rhs_, NULL); CHKERRXX(ierr);

ierr = KSPSolve(ksp, rhs_, solution); CHKERRXX(ierr);




On Thu, Mar 6, 2014 at 3:33 PM, Matthew Knepley knep...@gmail.com wrote:

 On Thu, Mar 6, 2014 at 5:24 PM, Mohammad Mirzadeh mirza...@gmail.com wrote:

 Hi guys,

 I have a discretization of Poisson equation with Neumann bc for embedded
 boundary grids in such a way that that nullspace is not the usual constant
 vector. Instead the nullspace is constant in the domain of interest and
 zero elsewhere.

 I compute this nullspace myself and have checked it against MATLAB by
 dumping the matrix and computing the nullspace explicitly using null
 function -- they match and there is only this single vector. Then I take
 this calculated vector and subtract it off the matrix and rhs.


 subtract it off the matrix does not make sense to me. Are you calling
 KSPSetNullSpace()?

Matt


 However, I am having convergence issues. For instance this is the output
 of ksp_monitor_true_residual for one particular run:

   0 KSP preconditioned resid norm 3.033840960250e+02 true resid norm 
 2.332886580745e-01 ||r(i)||/||b|| 1.e+00
   1 KSP preconditioned resid norm 1.018974811826e+01 true resid norm 
 1.941629896918e-02 ||r(i)||/||b|| 8.322864527335e-02
   2 KSP preconditioned resid norm 5.450493684941e-02 true resid norm 
 1.029339589324e-02 ||r(i)||/||b|| 4.412300185615e-02
   3 KSP preconditioned resid norm 3.944064039516e-02 true resid norm 
 1.030277925024e-02 ||r(i)||/||b|| 4.416322394443e-02
   4 KSP preconditioned resid norm 6.286181172600e-05 true resid norm 
 1.030243055045e-02 ||r(i)||/||b|| 4.416172923059e-02
   5 KSP preconditioned resid norm 4.349133658643e-06 true resid norm 
 1.030239080406e-02 ||r(i)||/||b|| 4.416155885630e-02
   6 KSP preconditioned resid norm 9.279429568232e-08 true resid norm 
 1.030239169298e-02 ||r(i)||/||b|| 4.41615626e-02
   7 KSP preconditioned resid norm 3.032522248740e-09 true resid norm 
 1.030239175066e-02 ||r(i)||/||b|| 4.416156291393e-02
   8 KSP preconditioned resid norm 6.533747246875e-09 true resid norm 
 1.030239175718e-02 ||r(i)||/||b|| 4.416156294184e-02
   9 KSP preconditioned resid norm 6.083185162500e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292220e-02
  10 KSP preconditioned resid norm 5.51031965e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
  11 KSP preconditioned resid norm 5.456758524534e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
  12 KSP preconditioned resid norm 5.456756081783e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
  13 KSP preconditioned resid norm 5.456755930952e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
  14 KSP preconditioned resid norm 5.456755930949e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02
  15 KSP preconditioned resid norm 5.456755930949e-12 true resid norm 
 1.030239175259e-02 ||r(i)||/||b|| 4.416156292221e-02


 As you can see, the true residual is quite large and moreover it does not 
 reduce beyond a certain point. This is using hypre as preconditioner, but 
 the situation is equally bad with several other preconditioner (ilu, sor, 
 jacobi, or even none). As for the solution itself, the error has poor to 
 none convergence under grid refinement. All this suggests that the linear 
 system is not converging in my case.


 Do you have any idea/suggestions why this is happening and how I can avoid 
 it?


 Thanks




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



Re: [petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
Matt,

here's the output:

  0 KSP Residual norm 3.033840960250e+02

  1 KSP Residual norm 1.018974811826e+01

  2 KSP Residual norm 5.450493684941e-02

  3 KSP Residual norm 3.944064039516e-02

  4 KSP Residual norm 6.286181172600e-05

  5 KSP Residual norm 4.349133658643e-06

  6 KSP Residual norm 9.279429568232e-08

  7 KSP Residual norm 3.032522248740e-09

  8 KSP Residual norm 6.533747246875e-09

  9 KSP Residual norm 6.083185162500e-12

Linear solve converged due to CONVERGED_RTOL iterations 9

KSP Object: 1 MPI processes

  type: bcgs

  maximum iterations=1, initial guess is zero

  tolerances:  relative=1e-12, absolute=1e-50, divergence=1

  left preconditioning

  has attached null space

  using PRECONDITIONED norm type for convergence test

PC Object: 1 MPI processes

  type: hypre

HYPRE BoomerAMG preconditioning

HYPRE BoomerAMG: Cycle type V

HYPRE BoomerAMG: Maximum number of levels 25

HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1

HYPRE BoomerAMG: Convergence tolerance PER hypre call 0

HYPRE BoomerAMG: Threshold for strong coupling 0.5

HYPRE BoomerAMG: Interpolation truncation factor 0

HYPRE BoomerAMG: Interpolation: max elements per row 0

HYPRE BoomerAMG: Number of levels of aggressive coarsening 0

HYPRE BoomerAMG: Number of paths for aggressive coarsening 1

HYPRE BoomerAMG: Maximum row sums 0.9

HYPRE BoomerAMG: Sweeps down 1

HYPRE BoomerAMG: Sweeps up   1

HYPRE BoomerAMG: Sweeps on coarse1

HYPRE BoomerAMG: Relax down  symmetric-SOR/Jacobi

HYPRE BoomerAMG: Relax upsymmetric-SOR/Jacobi

HYPRE BoomerAMG: Relax on coarse symmetric-SOR/Jacobi

HYPRE BoomerAMG: Relax weight  (all)  1

HYPRE BoomerAMG: Outer relax weight (all) 1

HYPRE BoomerAMG: Using CF-relaxation

HYPRE BoomerAMG: Measure typelocal

HYPRE BoomerAMG: Coarsen typeFalgout

HYPRE BoomerAMG: Interpolation type  classical

  linear system matrix = precond matrix:

  Mat Object:   1 MPI processes

type: seqaij

rows=263169, cols=263169

total: nonzeros=1236141, allocated nonzeros=1244417

total number of mallocs used during MatSetValues calls =0

  has attached null space

  not using I-node routines




On Thu, Mar 6, 2014 at 3:43 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  Yes. To be precise this is the set of functions I call:
 
  ierr = MatNullSpaceCreate(mpicomm, PETSC_FALSE, 1, null_space,
  A_null_space); CHKERRXX(ierr);
 
  ierr = MatSetNullSpace(A, A_null_space); CHKERRXX(ierr);
 
  ierr = KSPSetNullSpace(ksp, A_null_space); CHKERRXX(ierr);
 
  ierr = MatNullSpaceRemove(A_null_space, rhs_, NULL); CHKERRXX(ierr);

 Is the matrix symmetric?  If not, the right and left null spaces could
 be different, in which case this system might be inconsistent.



Re: [petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
Jed,

No, the matrix is actually non-symmetric due to grid adaptivity (hanging
nodes of the QuadTree). Anyway, what exactly do you mean by the system being
inconsistent?



On Thu, Mar 6, 2014 at 3:49 PM, Mohammad Mirzadeh mirza...@gmail.comwrote:

 Matt,

 here's the output:

   0 KSP Residual norm 3.033840960250e+02

   1 KSP Residual norm 1.018974811826e+01

   2 KSP Residual norm 5.450493684941e-02

   3 KSP Residual norm 3.944064039516e-02

   4 KSP Residual norm 6.286181172600e-05

   5 KSP Residual norm 4.349133658643e-06

   6 KSP Residual norm 9.279429568232e-08

   7 KSP Residual norm 3.032522248740e-09

   8 KSP Residual norm 6.533747246875e-09

   9 KSP Residual norm 6.083185162500e-12

 Linear solve converged due to CONVERGED_RTOL iterations 9

 KSP Object: 1 MPI processes

   type: bcgs

   maximum iterations=1, initial guess is zero

   tolerances:  relative=1e-12, absolute=1e-50, divergence=1

   left preconditioning

   has attached null space

   using PRECONDITIONED norm type for convergence test

 PC Object: 1 MPI processes

   type: hypre

 HYPRE BoomerAMG preconditioning

 HYPRE BoomerAMG: Cycle type V

 HYPRE BoomerAMG: Maximum number of levels 25

 HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1

 HYPRE BoomerAMG: Convergence tolerance PER hypre call 0

 HYPRE BoomerAMG: Threshold for strong coupling 0.5

 HYPRE BoomerAMG: Interpolation truncation factor 0

 HYPRE BoomerAMG: Interpolation: max elements per row 0

 HYPRE BoomerAMG: Number of levels of aggressive coarsening 0

 HYPRE BoomerAMG: Number of paths for aggressive coarsening 1

 HYPRE BoomerAMG: Maximum row sums 0.9

 HYPRE BoomerAMG: Sweeps down 1

 HYPRE BoomerAMG: Sweeps up   1

 HYPRE BoomerAMG: Sweeps on coarse1

 HYPRE BoomerAMG: Relax down  symmetric-SOR/Jacobi

 HYPRE BoomerAMG: Relax upsymmetric-SOR/Jacobi

 HYPRE BoomerAMG: Relax on coarse symmetric-SOR/Jacobi

 HYPRE BoomerAMG: Relax weight  (all)  1

 HYPRE BoomerAMG: Outer relax weight (all) 1

 HYPRE BoomerAMG: Using CF-relaxation

 HYPRE BoomerAMG: Measure typelocal

 HYPRE BoomerAMG: Coarsen typeFalgout

 HYPRE BoomerAMG: Interpolation type  classical

   linear system matrix = precond matrix:

   Mat Object:   1 MPI processes

 type: seqaij

 rows=263169, cols=263169

 total: nonzeros=1236141, allocated nonzeros=1244417

 total number of mallocs used during MatSetValues calls =0

   has attached null space

   not using I-node routines




 On Thu, Mar 6, 2014 at 3:43 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  Yes. To be precise this is the set of functions I call:
 
  ierr = MatNullSpaceCreate(mpicomm, PETSC_FALSE, 1, null_space,
  A_null_space); CHKERRXX(ierr);
 
  ierr = MatSetNullSpace(A, A_null_space); CHKERRXX(ierr);
 
  ierr = KSPSetNullSpace(ksp, A_null_space); CHKERRXX(ierr);
 
  ierr = MatNullSpaceRemove(A_null_space, rhs_, NULL); CHKERRXX(ierr);

 Is the matrix symmetric?  If not, the right and left null spaces could
 be different, in which case this system might be inconsistent.





Re: [petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
hummm just tried it in Matlab and you are correct -- they are different.
What does this mean for my system? Also what is the correct approach here?


On Thu, Mar 6, 2014 at 4:02 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  Jed,
 
  No the matrix is actually non-symmetric due to grid adaptivity (hanging
  nodes of QuadTree). Anyway, what do you exactly mean by the system being
  inconsistent?

 Sounds like the right and left null spaces are different.  You can test
 by checking the null space using the transpose of your system.
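For reference, a minimal sketch of this check (assuming A and null_space are
the assembled matrix and the computed null-space vector from above) could
look like the following; it simply tests whether A^T applied to the candidate
vector is numerically zero:

    /* check whether null_space is also a *left* null space, i.e. A^T n ~ 0 */
    Vec       y;
    PetscReal norm;
    ierr = VecDuplicate(null_space, &y); CHKERRXX(ierr);
    ierr = MatMultTranspose(A, null_space, y); CHKERRXX(ierr);
    ierr = VecNorm(y, NORM_2, &norm); CHKERRXX(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD, "||A^T n|| = %g\n", (double)norm); CHKERRXX(ierr);
    ierr = VecDestroy(&y); CHKERRXX(ierr);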



Re: [petsc-users] having issues with nullspace

2014-03-06 Thread Mohammad Mirzadeh
I am confused -- are you saying that I need to project the left null space
out of the rhs?


On Thu, Mar 6, 2014 at 4:10 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  hummm just tried it in Matlab and you are correct -- they are different.
  What does this mean for my system? Also what is the correct approach
 here?

 It means you projected the wrong null space out of your right hand side.
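A minimal sketch of that fix (hypothetical; left_null_space is assumed to be a
Vec holding the null space of A^T, computed separately) could be:

    /* make the rhs consistent by removing the *left* null space from it,
       while still attaching the (right) null space of A to the solver */
    MatNullSpace left_ns;
    ierr = MatNullSpaceCreate(mpicomm, PETSC_FALSE, 1, &left_null_space, &left_ns); CHKERRXX(ierr);
    ierr = MatNullSpaceRemove(left_ns, rhs_, NULL); CHKERRXX(ierr);
    ierr = KSPSetNullSpace(ksp, A_null_space); CHKERRXX(ierr);
    ierr = KSPSolve(ksp, rhs_, solution); CHKERRXX(ierr);
    ierr = MatNullSpaceDestroy(&left_ns); CHKERRXX(ierr);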



Re: [petsc-users] question about the PETSc Vec object and C++ destructors

2014-02-05 Thread Mohammad Mirzadeh
There are a couple of things you are doing wrong here!

 Shouldn't it be destroying the Vec declared by the very first thing t,
and hence throwing an error saying that you can't destroy a v that has not
been created?

No. The destructor of an object is called either 1) when you manually call
`delete` on an instance allocated via `new`, or 2) when the object is
allocated on the stack and goes out of scope. In your case the destructor
for `t` is called when the program reaches the end of `main`, i.e. when it
terminates.

In your case, you are first creating an empty object using the default ctor
and then assigning to it from a temporary object `thing(2)`. Once
the assignment is done, the destructor of the temporary object is called.
Also note that since you have not implemented an assignment operator for
your class, the compiler-generated default is used, which merely does a
shallow copy of `x`; that is most certainly not what you want, since in
PETSc a `Vec` is simply an opaque pointer. This means that to properly copy
one `thing` to another you need to implement the assignment operator and
most probably also the copy ctor. See
http://stackoverflow.com/questions/4172722/what-is-the-rule-of-three for
more details on this.
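For illustration only, a minimal sketch of what such a copy ctor and
assignment operator could look like for this class (hypothetical code, not
from the original message; it assumes the wrapped Vec should be deep-copied,
and error checking is omitted for brevity):

    class thing{
    public:
        Vec x;
        thing() : x(NULL) {}
        explicit thing(int N) { VecCreateSeq(PETSC_COMM_SELF, N, &x); }
        thing(const thing& other) : x(NULL) {        // copy ctor: duplicate layout, copy values
            if (other.x) { VecDuplicate(other.x, &x); VecCopy(other.x, x); }
        }
        thing& operator=(const thing& other) {       // assignment: release old Vec, deep-copy new one
            if (this != &other) {
                VecDestroy(&x);                      // VecDestroy is a no-op on a NULL Vec
                if (other.x) { VecDuplicate(other.x, &x); VecCopy(other.x, x); }
            }
            return *this;
        }
        ~thing() { VecDestroy(&x); }
    };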

Also, if you are going to depend on dtors to destroy PETSc objects, you need
to implement a class that initializes and finalizes PETSc through its ctor and
dtor, and instantiate it before any other object. Otherwise all other
objects will be calling their dtors after PetscFinalize. Something like
this should serve the purpose:

class PetscSession{
public:
    PetscSession(int argc, char* argv[]){
        PetscInitialize(&argc, &argv, NULL, NULL);
    }
    ~PetscSession(){
        PetscFinalize();
    }
};

int main(int argc, char* argv[]){
    PetscSession petsc(argc, argv);
    // All other objects come after the PetscSession has been created
    return 0;
}



On Wed, Feb 5, 2014 at 7:14 PM, Matthew Knepley knep...@gmail.com wrote:

 On Wed, Feb 5, 2014 at 8:02 PM, David Liu dave...@mit.edu wrote:

 Hi, this is a question mainly to clear up my understanding of what the
 Vec object is. Consider the following C++ code:

 //==========================================================

 #include <petsc.h>

 class thing{
 public:
   Vec x;
   thing(){};
   thing(int N){
     PetscPrintf(PETSC_COMM_WORLD, "x before create = %i\n", x);
     VecCreateSeq(PETSC_COMM_SELF, N, &x);
     PetscPrintf(PETSC_COMM_WORLD, "x after create = %i\n", x);
   }
   ~thing(){
     PetscPrintf(PETSC_COMM_WORLD, "x before destroy = %i\n", x);
     VecDestroy(&x);
     PetscPrintf(PETSC_COMM_WORLD, "x after destroy = %i\n", x);
   }
 };

 int main(int argc, char** argv){
   PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);
   thing t;
   PetscPrintf(PETSC_COMM_WORLD, "x before everything = %i\n", t.x);
   t = thing(2);
   PetscPrintf(PETSC_COMM_WORLD, "x after everything = %i\n", t.x);
   PetscFinalize();
 }

 //==========================================================

 The output, when run sequentially, is

 x before everything = 0
 x before create = 0
 x after create = -326926224
 x before destroy = -326926224
 x after destroy = 0
 x after everything = -326926224

 (among some unimportant error messages). If I try to VecGetSize(t.x, &N),
 immediately after the line t = thing(2), I get an error indicating that
 t.x has been destroyed.

 This behavior, as well as the printed output, suggests that the
 destructor being called during the line t = thing(2) is destroying the
 Vec just created by thing(2). Shouldn't it be destroying the Vec declared
 by the very first thing t, and hence throwing an error saying that you
 can't destroy a v that has not been created?


 This has nothing to do with Vec, it is about C++ copy semantics. This is
 why I would
 never tell someone to use C++.

 Matt

 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



[petsc-users] saving log info

2014-01-30 Thread Mohammad Mirzadeh
Hi guys,

I'd like to be able to post-process the values reported by -log_summary. Is
there any programmatic way to extract the data and save it manually to a
file?

If not, is there any parser that goes over the stdout dump and extracts the info?

Mohammad


Re: [petsc-users] saving log info

2014-01-30 Thread Mohammad Mirzadeh
Thanks Jed. This looks interesting.


On Thu, Jan 30, 2014 at 2:45 PM, Jed Brown j...@jedbrown.org wrote:

 Mohammad Mirzadeh mirza...@gmail.com writes:

  Hi guys,
 
  I'd like to be able to post-process the values reported by -log_summary.
 Is
  there any programmatic way to extract the data and save it manually to a
  file?
 
  If not, is there any parser that go over stdout dump and extract the
 info?

 I made a somewhat specialized parser and plotter here

   https://github.com/jedbrown/petscplot

 Unfortunately, I never turned it into a library or easily-reusable tool.
 These plots were for convergence performance with grid sequencing.

 https://github.com/jedbrown/petscplot/wiki/PETSc-Plot


 The parser is robust and easy to modify for whatever grammar you have
 (depending on what you're logging).



[petsc-users] question about PetscLogEvents

2013-12-04 Thread Mohammad Mirzadeh
Hi guys,

I'm gathering statistics for scaling analysis of my code and I'm making use
of PetscLogEvent objects. Some of the events I'm trying to log have run-times
of a few (1-5) seconds. Should I worry about any additional overhead?

In general, how long does an event need to be before the logging overhead is negligible?

Thanks
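For context, the basic pattern being asked about is just a register/begin/end
pair around the section of interest; a minimal sketch (hypothetical event
name, classid 0 used for simplicity):

    PetscLogEvent MY_EVENT;
    ierr = PetscLogEventRegister("MySolvePhase", 0, &MY_EVENT); CHKERRQ(ierr);

    ierr = PetscLogEventBegin(MY_EVENT, 0, 0, 0, 0); CHKERRQ(ierr);
    /* ... section being timed, e.g. a KSPSolve() call ... */
    ierr = PetscLogEventEnd(MY_EVENT, 0, 0, 0, 0); CHKERRQ(ierr);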


Re: [petsc-users] ISLocalToGlobalMapping + VecCreateGhost

2013-10-02 Thread Mohammad Mirzadeh
Matt,

I just discovered that my approach does not work properly with
VecGhostUpdateBegin/End -- I get wrong data after the ghost update
process. Any idea why? When I change the IS, is PETSc aware that my
ghost values are not just at the end of the vector?

Mohammad

On Wed, Oct 2, 2013 at 4:13 AM, Matthew Knepley knep...@gmail.com wrote:
 On Wed, Oct 2, 2013 at 12:27 AM, Mohammad Mirzadeh mirza...@gmail.com
 wrote:

 Hi guys,

 I just did something by pure guessing which seems to work and I want
 to make sure its the right thing!

 I have a specific layout for my vectors that look like this

  --
 | ghost values | local values | ghost values |
  --
 0, 1, 2, ...m, m+1, ...   m+n, m+n+1 ... N

 which means all nodes with index [0,m) and [m+n, N) are to be treated
 as ghost and all intermediate ones as local. Since PETSc stores the
 ghost values at the end of ghosted vec, so far I have been manually
 mapping back and forth between petsc and my application numbering.
 After profiling my code, it turned out that about 15% of run-time was
 just doing this mapping which is absolutely ridiculous!

 Anyway, to fix this now I set up an ISLocalToGlobalMapping object
 using my own application original local2global relation shown above.
 Then I create the petsc vec like this:

 // Create PetscVec
 // We do not care about the actual global index of ghost nodes at this
 point
 std::vectorPetscInt ghost_nodes(N - n -1, 0);
 ierr = VecCreateGhost(mpicomm, n+1, num_global, ghost_nodes.size(),
 (const PetscInt*)ghost_nodes[0], v); CHKERRQ(ierr);


 // Apply mapping
 ierr = VecSetLocalToGlobalMapping(v, mapping); CHKERRQ(ierr);

 After this point I do the usual VecGetArray on the vec and set the
 values, but this time without any extra mapping ... my very simple
 test seems to be ok. Is this the correct way of using
 ISLocalToGlobalMapping?


 Yes, this is the intention. And at a higher level, PETSc now tends to use a
 DM to
 organize this process. You tell the DM about your data layout (somewhat like
 giving
 the L2G mapping) and then you can DMGetLocalVector(), DMGetGlobalVector(),
 and
 DMLocalToGlobal().

   Thanks,

  Matt


 I guess I'm suspicious because VecCreateGhost is supposed to
 internally set the mapping and it is supposed to position ghost nodes
 at the end of the array which I don't want it to do...

 Thanks and sorry about the long email!




 --
 What most experimenters take for granted before they begin their experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener
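As a rough illustration of the DM workflow described above (a minimal sketch;
'da' is assumed to be an already-created DM, e.g. a DMDA, that describes the
data layout):

    Vec gvec, lvec;
    ierr = DMGetGlobalVector(da, &gvec); CHKERRQ(ierr);
    ierr = DMGetLocalVector(da, &lvec); CHKERRQ(ierr);

    /* fill gvec, then update the ghosted local copy */
    ierr = DMGlobalToLocalBegin(da, gvec, INSERT_VALUES, lvec); CHKERRQ(ierr);
    ierr = DMGlobalToLocalEnd(da, gvec, INSERT_VALUES, lvec); CHKERRQ(ierr);

    /* ... compute with lvec (owned + ghost entries), then gather back if needed */
    ierr = DMLocalToGlobalBegin(da, lvec, ADD_VALUES, gvec); CHKERRQ(ierr);
    ierr = DMLocalToGlobalEnd(da, lvec, ADD_VALUES, gvec); CHKERRQ(ierr);

    ierr = DMRestoreLocalVector(da, &lvec); CHKERRQ(ierr);
    ierr = DMRestoreGlobalVector(da, &gvec); CHKERRQ(ierr);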


Re: [petsc-users] ISLocalToGlobalMapping + VecCreateGhost

2013-10-02 Thread Mohammad Mirzadeh
Ok I think I get it now. I'll look into DM (and probably come back
with more questions!)

Thanks

On Wed, Oct 2, 2013 at 11:36 AM, Matthew Knepley knep...@gmail.com wrote:
 On Wed, Oct 2, 2013 at 1:32 PM, Mohammad Mirzadeh mirza...@gmail.com
 wrote:

 Matt,

 I just discovered that my approach does not work properly with
 VecGhostUpdateBegin/End -- I get wrong data after ghost update
 process. Any idea why? When I change the IS, is PETSc aware that my
 ghost values are not just at the end of vector?


 Then I was probably misunderstanding what you want to do.

   1) L2GM is for translating local to global indices automatically, but
 knows nothing about ghosting in the Vec

   2) For ghosting, you have a local vector and global vector and a
 VecScatter that maps between them

   3) VecGhost is a special kind of 2) since we know that the local vector
  fits inside the global vec

   4) Since you break relationship 3), I would just use the DM (as I say
 below).

   Matt


 Mohammad

 On Wed, Oct 2, 2013 at 4:13 AM, Matthew Knepley knep...@gmail.com wrote:
  On Wed, Oct 2, 2013 at 12:27 AM, Mohammad Mirzadeh mirza...@gmail.com
  wrote:
 
  Hi guys,
 
  I just did something by pure guessing which seems to work and I want
  to make sure its the right thing!
 
  I have a specific layout for my vectors that look like this
 
   --
  | ghost values | local values | ghost values |
   --
  0, 1, 2, ...m, m+1, ...   m+n, m+n+1 ... N
 
  which means all nodes with index [0,m) and [m+n, N) are to be treated
  as ghost and all intermediate ones as local. Since PETSc stores the
  ghost values at the end of ghosted vec, so far I have been manually
  mapping back and forth between petsc and my application numbering.
  After profiling my code, it turned out that about 15% of run-time was
  just doing this mapping which is absolutely ridiculous!
 
  Anyway, to fix this now I set up an ISLocalToGlobalMapping object
  using my own application original local2global relation shown above.
  Then I create the petsc vec like this:
 
  // Create PetscVec
  // We do not care about the actual global index of ghost nodes at this
  point
  std::vectorPetscInt ghost_nodes(N - n -1, 0);
  ierr = VecCreateGhost(mpicomm, n+1, num_global, ghost_nodes.size(),
  (const PetscInt*)ghost_nodes[0], v); CHKERRQ(ierr);
 
 
  // Apply mapping
  ierr = VecSetLocalToGlobalMapping(v, mapping); CHKERRQ(ierr);
 
  After this point I do the usual VecGetArray on the vec and set the
  values, but this time without any extra mapping ... my very simple
  test seems to be ok. Is this the correct way of using
  ISLocalToGlobalMapping?
 
 
  Yes, this is the intention. And at a higher level, PETSc now tends to
  use a
  DM to
  organize this process. You tell the DM about your data layout (somewhat
  like
  giving
  the L2G mapping) and then you can DMGetLocalVector(),
  DMGetGlobalVector(),
  and
  DMLocalToGlobal().
 
Thanks,
 
   Matt
 
 
  I guess I'm suspicious because VecCreateGhost is supposed to
  internally set the mapping and it is supposed to position ghost nodes
  at the end of the array which I don't want it to do...
 
  Thanks and sorry about the long email!
 
 
 
 
  --
  What most experimenters take for granted before they begin their
  experiments
  is infinitely more interesting than any results to which their
  experiments
  lead.
  -- Norbert Wiener




 --
 What most experimenters take for granted before they begin their experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener


[petsc-users] Scatter error

2013-10-01 Thread Mohammad Mirzadeh
Hi guys,

I have two ghosted vectors fxx, fyy which should store derivatives of
a quantity f. They are both obtained via calling VecDuplicate on f. To
minimize communications when computing both fxx and fyy I do the
following:

// loop over boundary points and compute fxx and fyy

VecGhostUpdateBegin(fxx, INSERT_VALUES, SCATTER_FORWARD);
VecGhostUpdateBegin(fyy, INSERT_VALUES, SCATTER_FORWARD);

// compute fxx and fyy for internal points

VecGhostUpdateEnd(fxx, INSERT_VALUES, SCATTER_FORWARD);
VecGhostUpdateEnd(fyy, INSERT_VALUES, SCATTER_FORWARD);

However this does not work and I get the following error:
[1]PETSC ERROR: - Error Message

[1]PETSC ERROR: Object is in wrong state!
[1]PETSC ERROR:  Scatter ctx already in use!
[1]PETSC ERROR:

[1]PETSC ERROR: Petsc Development HG revision:
d0a314355399bb0c38e384fde4b1feb01b721896  HG Date: Sat Nov 03 17:18:52
2012 -0500
[1]PETSC ERROR: See docs/changes/index.html for recent updates.
[1]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[1]PETSC ERROR: See docs/index.html for manual pages.
[1]PETSC ERROR:

[1]PETSC ERROR: ./derivatives on a arch-linu named mohammad-Desktop by
mohammad Tue Oct  1 15:47:38 2013
[1]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[1]PETSC ERROR: Configure run at Sat Nov  3 15:26:57 2012
[1]PETSC ERROR: Configure options --download-blacs=1
--download-f-blas-lapack=1 --download-hdf5=1 --download-hypre=1
--download-metis=1 --download-mumps=1 --download-parmetis=1
--download-scalapack=1 --download-superlu_dist=1 --with-clanguage=cxx
--with-mpi-dir=/usr/bin --with-pthreadclasses=1
PETSC_ARCH=arch-linux2-cxx-debug
[1]PETSC ERROR:

[1]PETSC ERROR: VecScatterBegin() line 1546 in
/home/mohammad/soft/petsc-dev/src/vec/vec/utils/vscat.c
[1]PETSC ERROR: VecGhostUpdateBegin() line 235 in
/home/mohammad/soft/petsc-dev/src/vec/vec/impls/mpi/commonmpvec.c
[1]PETSC ERROR: constructing node neighboring information ... done in
0.0314 secs. on process 0 [Note: only showing root's timings]
Computing derivatives the new way ...
dxx_and_dyy_central() line 659 in
unknowndirectory/../../src/my_p4est_node_neighbors.cpp
terminate called after throwing an instance of 'PETSc::Exception'
  what():  std::exception
[mohammad-Desktop:19589] *** Process received signal ***
[mohammad-Desktop:19589] Signal: Aborted (6)
[mohammad-Desktop:19589] Signal code:  (-6)
[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Object is in wrong state!
[0]PETSC ERROR:  Scatter ctx already in use!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Development HG revision:
d0a314355399bb0c38e384fde4b1feb01b721896  HG Date: Sat Nov 03 17:18:52
2012 -0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./derivatives on a arch-linu named mohammad-Desktop by
mohammad Tue Oct  1 15:47:38 2013
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Sat Nov  3 15:26:57 2012
[0]PETSC ERROR: Configure options --download-blacs=1
--download-f-blas-lapack=1 --download-hdf5=1 --download-hypre=1
--download-metis=1 --download-mumps=1 --download-parmetis=1
--download-scalapack=1 --download-superlu_dist=1 --with-clanguage=cxx
--with-mpi-dir=/usr/bin --with-pthreadclasses=1
PETSC_ARCH=arch-linux2-cxx-debug
[0]PETSC ERROR:

[0]PETSC ERROR: VecScatterBegin() line 1546 in
/home/mohammad/soft/petsc-dev/src/vec/vec/utils/vscat.c
[0]PETSC ERROR: VecGhostUpdateBegin() line 235 in
/home/mohammad/soft/petsc-dev/src/vec/vec/impls/mpi/commonmpvec.c
[0]PETSC ERROR: dxx_and_dyy_central() line 659 in
unknowndirectory/../../src/my_p4est_node_neighbors.cpp
terminate called after throwing an instance of 'PETSc::Exception'
  what():  std::exception
[mohammad-Desktop:19588] *** Process received signal ***
[mohammad-Desktop:19588] Signal: Aborted (6)
[mohammad-Desktop:19588] Signal code:  (-6)
[mohammad-Desktop:19589] [ 0]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7f12290f4cb0]
[mohammad-Desktop:19589] [ 1]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35) [0x7f1228a60425]
[mohammad-Desktop:19589] [ 2]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x17b) [0x7f1228a63b8b]
[mohammad-Desktop:19589] [ 3]
/usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x11d)
[0x7f122988069d]

Re: [petsc-users] Scatter error

2013-10-01 Thread Mohammad Mirzadeh
 It would be half the messages.  Why are these vectors separate in the
 first place?

I had not worked with block vectors before, so just convenience I
guess! When working with blocked vectors, how are items stored? Is it
like fxx[0], fyy[0], fxx[1], fyy[1], ... or is it like
fxx[0],fxx[1],...,fyy[0],fyy[1],...


Re: [petsc-users] Scatter error

2013-10-01 Thread Mohammad Mirzadeh
Thanks Matt.

On Tue, Oct 1, 2013 at 5:26 PM, Matthew Knepley knep...@gmail.com wrote:
 On Tue, Oct 1, 2013 at 7:02 PM, Mohammad Mirzadeh mirza...@gmail.com
 wrote:

  It would be half the messages.  Why are these vectors separate in the
  first place?

 I had not worked with block vectors before so just convenience i
 guess! When working with blocked vectors, how are items stored? Is it
 like fxx[0], fyy[0], fxx[1], fyy[1], ... or is it like


 This one.

Matt


 fxx[0],fxx[1],...,fyy[0],fyy[1],...




 --
 What most experimenters take for granted before they begin their experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener
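For illustration, a minimal sketch of combining fxx and fyy into a single
ghosted block vector (block size 2, interleaved per node as discussed above;
nlocal_nodes, nghost and ghost_blocks are assumed to hold the number of owned
nodes, the number of ghost nodes, and the global block indices of the ghosts):

    Vec          f2;
    PetscScalar *a;
    PetscInt     i, nlocal;

    ierr = VecCreateGhostBlock(PETSC_COMM_WORLD, 2, 2*nlocal_nodes, PETSC_DECIDE,
                               nghost, ghost_blocks, &f2); CHKERRQ(ierr);
    ierr = VecGetLocalSize(f2, &nlocal); CHKERRQ(ierr);
    ierr = VecGetArray(f2, &a); CHKERRQ(ierr);
    for (i = 0; i < nlocal/2; i++) {
      a[2*i + 0] = 0.0;  /* fxx at owned node i */
      a[2*i + 1] = 0.0;  /* fyy at owned node i */
    }
    ierr = VecRestoreArray(f2, &a); CHKERRQ(ierr);

    /* one ghost update now moves both derivatives at once */
    ierr = VecGhostUpdateBegin(f2, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);
    ierr = VecGhostUpdateEnd(f2, INSERT_VALUES, SCATTER_FORWARD); CHKERRQ(ierr);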


[petsc-users] ISLocalToGlobalMapping + VecCreateGhost

2013-10-01 Thread Mohammad Mirzadeh
Hi guys,

I just did something by pure guessing which seems to work and I want
to make sure it's the right thing!

I have a specific layout for my vectors that look like this

 --
| ghost values | local values | ghost values |
 --
0, 1, 2, ...m, m+1, ...   m+n, m+n+1 ... N

which means all nodes with index [0,m) and [m+n, N) are to be treated
as ghost and all intermediate ones as local. Since PETSc stores the
ghost values at the end of ghosted vec, so far I have been manually
mapping back and forth between petsc and my application numbering.
After profiling my code, it turned out that about 15% of run-time was
just doing this mapping which is absolutely ridiculous!

Anyway, to fix this now I set up an ISLocalToGlobalMapping object
using my own application's original local-to-global relation shown above.
Then I create the petsc vec like this:

// Create PetscVec
// We do not care about the actual global index of ghost nodes at this point
std::vector<PetscInt> ghost_nodes(N - n - 1, 0);
ierr = VecCreateGhost(mpicomm, n+1, num_global, ghost_nodes.size(),
(const PetscInt*)&ghost_nodes[0], &v); CHKERRQ(ierr);


// Apply mapping
ierr = VecSetLocalToGlobalMapping(v, mapping); CHKERRQ(ierr);

After this point I do the usual VecGetArray on the vec and set the
values, but this time without any extra mapping ... my very simple
test seems to be ok. Is this the correct way of using
ISLocalToGlobalMapping?

I guess I'm suspicious because VecCreateGhost is supposed to
internally set the mapping and it is supposed to position ghost nodes
at the end of the array which I don't want it to do...

Thanks and sorry about the long email!


[petsc-users] ghost size of a ghosted vector

2013-09-20 Thread Mohammad Mirzadeh
Hi guys,

Is there any function that would return the ghost (or ghost+local
preferably) size of a ghosted vector? VecGetSize returns the total
global size and VecGetLocalSize returns only the local part.

I need this size for an error-checking guard I'm adding in my code.

Thanks,
Mohammad


Re: [petsc-users] ghost size of a ghosted vector

2013-09-20 Thread Mohammad Mirzadeh
sweet! Thanks Barry :)

On Fri, Sep 20, 2013 at 7:40 PM, Barry Smith bsm...@mcs.anl.gov wrote:

   You need to call VecGhostGetLocalForm()  and call VecGetSize() on that. 
 Don't worry, its cost is only the function calls.

Barry
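In code, the suggestion above amounts to something like this (a minimal
sketch, assuming v is the ghosted Vec):

    Vec      vlocal;
    PetscInt n_local_plus_ghost;
    ierr = VecGhostGetLocalForm(v, &vlocal); CHKERRQ(ierr);
    ierr = VecGetSize(vlocal, &n_local_plus_ghost); CHKERRQ(ierr);  /* owned + ghost entries */
    ierr = VecGhostRestoreLocalForm(v, &vlocal); CHKERRQ(ierr);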

 On Sep 20, 2013, at 9:34 PM, Mohammad Mirzadeh mirza...@gmail.com wrote:

 Hi guys,

 Is there any function that would return the ghost (or ghost+local
 preferably) size of a ghosted vector? VecGetSize returns the total
 global size and VecGetLocalSize returns only the local part.

 I need this size for a error-checking guard i'm adding in my code.

 Thanks,
 Mohammad



[petsc-users] VecGhost memory layout

2013-08-08 Thread Mohammad Mirzadeh
Hi guys,

I'm running into a bug that has made me question my understanding of
memory layout in VecGhost.

First, I remember reading somewhere before (in the manual or mailing
list which I cannot find now) that the way they are internally
organized is all local values followed by all ghost values. In other
words ghost values are ALWAYS padded to the end of the array. Is this
correct?

Second, when I want to access both local and ghosted values, I do the following,

VecGhostGetLocalForm(F, &F_loc);
VecGetArray(F_loc, &F_loc_ptr);
// do computation on F_loc_ptr
VecRestoreArray(F_loc, &F_loc_ptr);
VecGhostRestoreLocalForm(F, &F_loc);

here I assume that in accessing F_loc_ptr, all indices from [0,
numLocal) are local values and the rest are ghost values. Once I'm
done, I need every processor to update its ghost region and I call

VecGhostUpdateBegin(F, INSERT_VALUES, SCATTER_FORWARD);
VecGhostUpdateEnd(F, INSERT_VALUES, SCATTER_FORWARD);

Is there any flaw in what I'm doing? Also, as a side question, if I
call VecGetArray directly on F (and not F_loc) do I get junk values?

Thanks,
M


Re: [petsc-users] implementation of multi-level grid in petsc

2013-08-08 Thread Mohammad Mirzadeh
How big of an application are you looking into? If you are thinking in
the range of a couple of 10M grid points on a couple of hundred
processors, then I'd say the simplest approach is to create the grid in
serial and then use PETSc's interface to ParMetis to handle the
partitioning. I did this with my quadtree code and could easily scale
quadtrees on the order of 16.5M grid points up to 75% efficiency on 256
processors for a Poisson equation test.

If you are thinking of a much larger problem (think a couple of 100M grid
points and on the order of several thousand processors), I could recommend
p4est if you want to do tree-based grids. In that case using the deal.II
interface will be really beneficial, as p4est alone is really a bare-bones
package. I do not have enough experience with block-structured AMR so I
cannot comment on that.

On Thu, Aug 8, 2013 at 1:28 PM, Mark F. Adams mfad...@lbl.gov wrote:

 On Aug 8, 2013, at 3:32 PM, Roc Wang pengxw...@hotmail.com wrote:

 Thanks Matt,

 I tried Chombo for implementing AMR but have not tried SAMRAI yet. Chombo can
 do AMR, but it seems the data structure is quite complicated to customize for
 my usage.  What I want to do with petsc is to compose a simple home-made
 blocked multi-level grid, though it is not automatically adaptive.  However,
 I don't have too much experience with petsc. As of now, I plan to use DM
 to manage the data for the big domain and all small sub-domains.  I am not
 sure whether it is a good idea.  So, any suggestions are appreciated very
 much.  Thanks again.


 As Matt said, this is not what you want to do, most likely.  Building AMR on
 DM/DA is a lot of work unless you have a simple application and have a clear
 idea of how to do it.  Chombo is flexible but it is complex and takes time
 to get started.  I'm not familiar with SAMRAI but I would guess it is like
 Chombo.  Deal.II might be worth looking into.  I'm not familiar with it.

 Best,




 
 Date: Thu, 8 Aug 2013 14:03:53 -0500
 Subject: Re: [petsc-users] implementation of multi-level grid in petsc
 From: knep...@gmail.com
 To: pengxw...@hotmail.com
 CC: petsc-users@mcs.anl.gov

 On Thu, Aug 8, 2013 at 1:29 PM, Roc Wang pengxw...@hotmail.com wrote:

 Hi,

 I am working on a multi-level grid for the Poisson equation.  I need to refine
 a sub-region of the computational domain. To do this, I plan to build some
 boxes (patches) based on the coarsest level. I am using DM to manage the
 data. I found there is a new function DMPatchCreate() in version 3.4.
 Is this function the right one I should use for the refined region?  If it
 is not, which ones should I use?


 That is an experiment and does not work.


 My proposed approach is to start with  code
 dm/impls/patch/examples/tests/ex1.c. And then follow the code
 /dm/examples/tutorials/ex65dm.c. Is this approach the right way to my goal?

 In addition, I need to use not only the nodes but also the cells
 including nodes.  Should I use DMMesh to create the cells? I noticed DMMesh
 is mainly for unstructured grid, but I didn't find other class that
 implements structured cells.  Can anybody give me some suggestions on
 multi-level grid or let me know which examples I should start with? Thanks.


 No, that is not appropriate.

 It sounds like you want structured AMR. PETSc does not do this, and there
 are packages that do it:

 a) Chombo

 b) SAMRAI

 which are both patch-based AMR. If you want octree-style AMR you could use
 p4est, but it would mean
 a lot of coding along the lines of http://arxiv.org/abs/1308.1472, or
 Deal.II which is a complete package.
 I think Deal is the closest to using PETSc solvers.

   Thanks,

  Matt

 --
 What most experimenters take for granted before they begin their experiments
 is infinitely more interesting than any results to which their experiments
 lead.
 -- Norbert Wiener




Re: [petsc-users] VecGhost memory layout

2013-08-08 Thread Mohammad Mirzadeh
Awesome. Thanks Barry for the quick response.

On Thu, Aug 8, 2013 at 1:42 PM, Barry Smith bsm...@mcs.anl.gov wrote:

 On Aug 8, 2013, at 2:56 PM, Mohammad Mirzadeh mirza...@gmail.com wrote:

 Hi guys,

 I'm running into a bug that has made me question my understanding of
 memory layout in VecGhost.

 First, I remember reading somewhere before (in the manual or mailing
 list which I cannot find now) that the way they are internally
 organized is all local values followed by all ghost values. In other
 words ghost values are ALWAYS padded to the end of the array. Is this
 correct?


Yes

 Second, when I want to access both local and ghosted values, I do the 
 following,

 VecGhostGetLocalForm(F, F_loc);
 VecGetArray(F_loc, F_loc_ptr);
 // do computation on F_loc_ptr
 VecRestoreArray(F_loc, F_loc_ptr);
 VecGhostRestoreLocalForm(F, F_loc);

 here I assume that in accessing F_loc_ptr, all indecies from [0,
 numLocal) are local values and the rest are ghost values. Once I'm
 done, I need every processor to update its ghost region and I call

Good

 VecGhostUpdateBegin(F, INSERT_VALUES, SCATTER_FORWARD);
 VecGhostUpdateEnd(F, INSERT_VALUES, SCATTER_FORWARD);

 Is there any flaw in what I'm doing?

No

 Also, as a side question, if I
 call VecGetArray directly on F (and not F_loc) do I get junk values?

 No, they actually share the same array so you will get the same values.

Barry


 Thanks,
 M



Re: [petsc-users] ViennaCL

2013-05-24 Thread Mohammad Mirzadeh
I see. Thanks for the info Karl.


On Fri, May 24, 2013 at 8:36 PM, Karl Rupp r...@mcs.anl.gov wrote:

 Hi,

 currently only the OpenCL backend is used in PETSc, as with CUDA there is
 already the cusp bindings and for threading there is threadcomm. Thus, you
 don't need to add any additional defines.

 Best regards,
 Karli



 On 05/24/2013 10:04 PM, Mohammad Mirzadeh wrote:

 sweet. I was in the middle of writing my own interfaces; i'll try those
 as well. Do I still get to use the VIENNACL_WITH_XYZ macros?


 On Fri, May 24, 2013 at 7:45 PM, Karl Rupp r...@mcs.anl.gov
 mailto:r...@mcs.anl.gov wrote:

 Hi Mohammad,

 there is a first interface to ViennaCL in the next-branch already.
 Configure with
--download-viennacl
    --with-opencl-include=/path/to/OpenCL/includes
    --with-opencl-lib=/path/to/libOpenCL.so

 and then use
-vec_type viennacl
-mat_type aijviennacl
 as runtime options.

 As this resides in next, it is still work in progress. If you
 encounter any problems during installation, please let us know.
 Also, OpenCL typically shows larger latency than CUDA, so you should
 have at least 100k unknowns to see any performance gain.

 Best regards,
 Karli



 On 05/24/2013 09:08 PM, Mohammad Mirzadeh wrote:

 Hi guys,

 Speaking of interfaces, is there any plan to provide interfaces to
 ViennaCL solvers?

 Tnx
 Mohammad







[petsc-users] Which IDE you use to develop petsc application

2012-11-16 Thread Mohammad Mirzadeh
I have successfully used Qt Creator with any C/C++ program I have developed
regardless of what external package I'm linking against. It works for my
projects with PETSc really nicely and you get all the benefits of
auto-completion, refactoring, and function/class lookups. Plus, you can
use the visual debugger which is just a front-end to gdb and works quite
nicely whether you are debugging serial, MPI, or threaded codes.

This strategy is perfect for local development where you later deploy the
code to a remote cluster for running. If you want to directly edit codes on
a remote cluster, you can mount the remote drive and just use the IDE as if
you were editing local files. The debugger won't work in this case
obviously (although there is support for remote debugging as well, but I
have not tried it out). Finally, it has front-ends to valgrind for
both the memcheck and callgrind tools.

There is a little bit of information in the manual on this, but do not
hesitate to ask me questions if you are interested.


On Thu, Nov 15, 2012 at 5:03 PM, Barry Smith bsmith at mcs.anl.gov wrote:


Because we are developing PETSc as a library that must be portable to
 many users, most people who develop the PETSc libraries do not use an IDE.

For someone who is writing AN APPLICATION that uses PETSc (especially
 an application with GUIs) it makes sense to use an IDE to do that
 development. What you choose should depend on what machine you develop on
 and what you are experienced with or comfortable with. In the users manual
 we try to provide some information on using PETSc from a variety of IDEs.
 We are interested in collecting information on using PETSc from IDEs and
 improving our documentation.


Barry


 On Nov 15, 2012, at 4:52 PM, Fande Kong fd.kong at siat.ac.cn wrote:

   Got it. Thanks.
  On Thu, Nov 15, 2012 at 3:49 PM, Matthew Knepley knepley at gmail.com
 wrote:
  On Thu, Nov 15, 2012 at 5:41 PM, Fande Kong fande.kong at colorado.edu
 wrote:
   Hi all,
  
  In order to improve the efficiency of petsc application
  development, which IDE are you using?
 
  Emacs+gdb.
 
Matt
 
   Regards,
   --
   Fande Kong
   Department of Computer Science
   University of Colorado at Boulder
  
  
 
 
 
  --
  What most experimenters take for granted before they begin their
  experiments is infinitely more interesting than any results to which
  their experiments lead.
  -- Norbert Wiener
 
 
 
 
  --
  Fande Kong
  ShenZhen Institutes of Advanced Technology
  Chinese Academy of Sciences
 




[petsc-users] thread model

2012-11-04 Thread Mohammad Mirzadeh

  As I said before you have to set the core affinity for OpenMP through
 environment variables.


Well, I tried 'export GOMP_CPU_AFFINITY=0-7' before running the code but
that did not help. I'm using gcc. Am I doing it wrong?

i) Which example are you running? Please send output of -log_summary.


This was my own  code. Will try on some of the examples and report back.


 ii) Only the vector and a few matrix operations are threaded currently.
 There are no threaded preconditioners yet.


Aah! That may actually be it. I'm using hypre and I think that is not
threaded. Will try with -pc_type none to see if that makes any difference.


[petsc-users] thread model

2012-11-03 Thread Mohammad Mirzadeh
Hum ... That did not help. I found this -threadcomm_affinities
0,1,2,3,4,5,6,7 but that also did nothing. Is that option relevant at all?

Another question: the linux box I'm testing on has 2 quad-cores (8
cores/2 sockets). If I run the code with mpirun -np 8, I get about 3X which
makes sense to me. However, if I run with either pthread (which seems to
use all cores) or openmp (which always defaults to 1 no matter what) I get
the same performance as serial. Does this mean there is something messed up
with the hardware and/or how mpi/openmp/pthread is set up?


On Fri, Nov 2, 2012 at 8:24 PM, Shri abhyshr at mcs.anl.gov wrote:


 On Nov 2, 2012, at 9:54 PM, Mohammad Mirzadeh wrote:

  Hi,
 
  To use the new thread model in PETSc, does it suffice to run the code
 with the following?
 
  -threadcomm_type openmp/pthread -threadcomm_nthreads #
 
  When I run the code with openmp, only 1 processor/core is active
 (looking at top). When using pthread, all cores are active. Am I missing
 something?
 OpenMP defaults to binding all threads to a single core if the cpu
 affinity is not specified explicitly. If you
 are using GNU OpenMP then you can bind the threads to specific cores using
 the environment variable GOMP_CPU_AFFINITY
 http://gcc.gnu.org/onlinedocs/libgomp/GOMP_005fCPU_005fAFFINITY.html
 If you are using some other OpenMP implementation then check its manual to see
 how to set cpu affinity.

 Shri




[petsc-users] thread model

2012-11-02 Thread Mohammad Mirzadeh
Hi,

To use the new thread model in PETSc, does it suffice to run the code with
the following?

-threadcomm_type openmp/pthread -threadcomm_nthreads #

When I run the code with openmp, only 1 processor/core is active (looking
at top). When using pthread, all cores are active. Am I missing something?


[petsc-users] matrix reordering

2012-10-20 Thread Mohammad Mirzadeh
Awesome. Thanks for doing this Jed.

On Sat, Oct 20, 2012 at 5:38 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Fri, Oct 19, 2012 at 6:03 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Cool. Jed, could you please point me to an example/tutorial for this? I'm
 going to give it a shot.


 This works for SeqAIJ:

   ierr = MatGetOrdering(A,ordering,&rowperm,&colperm);CHKERRQ(ierr);
   ierr = MatPermute(A,rowperm,colperm,&Aperm);CHKERRQ(ierr);
   ierr = VecPermute(b,colperm,PETSC_FALSE);CHKERRQ(ierr);

   ierr = KSPSetOperators(ksp,Aperm,Aperm,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
   ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
   ierr = VecPermute(x,rowperm,PETSC_TRUE);CHKERRQ(ierr);

 Now x is the solution back in the original ordering.


 There is an example in petsc-dev now.


 https://bitbucket.org/petsc/petsc-dev/src/tip/src/ksp/ksp/examples/tutorials/ex18.c

 It should work with petsc-3.3 in serial, but you need petsc-dev for
 parallel MatPermute.




 Thanks


 On Fri, Oct 19, 2012 at 3:57 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 _You_ can compute a MatOrdering, MatPermute and VecPermute, solve, and
 permute back. Profile and if the solve is faster, go ahead and lift the
 ordering code back into your mesh.


 On Fri, Oct 19, 2012 at 5:48 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Barry. I think I agree with you that right way to do it is
 through mesh setup. As a matter of fact I have done that for some of my
 problems. Yet, I can think of situations where that kind of support might
 be beneficial, especially if someone (a.k.a me in this case!) is looking
 for a 'dirty and fast' fix.

  Anyway, I still believe that in the long term one should do the
  partitioning in a pre-processing step anyway, and so there may not be much
  incentive for adding such support.

 Thanks again,
 Mohammad


 On Fri, Oct 19, 2012 at 3:00 PM, Barry Smith bsmith at mcs.anl.govwrote:


Mohammad,

   We've thought about adding this kind of support over the
 years but always came to the conclusion that the right way to do it is
 during the mesh setup step.

Barry


 On Oct 19, 2012, at 4:16 PM, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:

  Hi guys,
 
  Quick question. *After* the matrix is setup, is it possible to ask
 PETSc to renumber the nodes internally in the KSPSolve phase to minimize
 communications for MatMul inside the solver? Can I use the MatPartitioning
 object for this or is that only intended to partition the mesh as a
 pre-processing step?
 
  Thanks








[petsc-users] matrix reordering

2012-10-19 Thread Mohammad Mirzadeh
Hi guys,

Quick question. *After* the matrix is set up, is it possible to ask PETSc to
renumber the nodes internally in the KSPSolve phase to minimize
communications for MatMul inside the solver? Can I use the MatPartitioning
object for this or is that only intended to partition the mesh as a
pre-processing step?

Thanks


[petsc-users] matrix reordering

2012-10-19 Thread Mohammad Mirzadeh
Cool. Jed, could you please point me to an example/tutorial for this? I'm
going to give it a shot.

Thanks

On Fri, Oct 19, 2012 at 3:57 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 _You_ can compute a MatOrdering, MatPermute and VecPermute, solve, and
 permute back. Profile and if the solve is faster, go ahead and lift the
 ordering code back into your mesh.


 On Fri, Oct 19, 2012 at 5:48 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Barry. I think I agree with you that right way to do it is through
 mesh setup. As a matter of fact I have done that for some of my problems.
 Yet, I can think of situations where that kind of support might be
 beneficial, especially if someone (a.k.a me in this case!) is looking for a
 'dirty and fast' fix.

 Anyway, I still believe that in long term, one should do the partitioning
 in a pre-processing step anyways and so there may not be much incentive for
 adding such a support.

 Thanks again,
 Mohammad


 On Fri, Oct 19, 2012 at 3:00 PM, Barry Smith bsmith at mcs.anl.gov wrote:


Mohammad,

  We've thought about adding this kind of support this over the years
 but always came to the conclusion that the right way to do it is during
 the mesh setup step.

Barry


 On Oct 19, 2012, at 4:16 PM, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:

  Hi guys,
 
  Quick question. *After* the matrix is setup, is it possible to ask
 PETSc to renumber the nodes internally in the KSPSolve phase to minimize
 communications for MatMul inside the solver? Can I use the MatPartitioning
 object for this or is that only intended to partition the mesh as a
 pre-processing step?
 
  Thanks






[petsc-users] missing libpetsc.a

2012-09-06 Thread Mohammad Mirzadeh
You need to link to all of the libraries. At the petsc top directory issue:

make PETSC_DIR=$PWD PETSC_ARCH=arch-linux2-c-debug getlinklibs

That'll give you all the libraries that need to be linked to the executable

On Thu, Sep 6, 2012 at 12:23 AM, 0sabio00 at gmail.com wrote:

 I think i'm having a trouble linking to the petsc library.

 gcc  main.cpp  -I /path to /petsc-3.3-p3/include -I /path
 to/petsc-3.3-p3/arch-linux2-c-debug/include
 -L/path to /petsc-3.3-p3/arch-linux2-c-debug/lib/ -lpetsc -o test

 I get an error = DMGetMatrix not declared in this scope.

 and I realized that I can't find libpetsc.a or libpetsc.so anywhere.
  Where is the petsc library exactly? if I find the location of the petsc
 library and link it would it fix the problem?

 Thanks



[petsc-users] Petsc crashes with intel compiler

2012-08-31 Thread Mohammad Mirzadeh
hi guys,

After doing many, many tests across different machines (a Linux box and a Mac)
and clusters (Gordon and Lonestar), I'm able to reproduce a test in which my
code crashes if I use a petsc version that is compiled with the intel compiler.
This does not happen if I use gcc. When I use the BiCGSTAB solver along with
hypre's BoomerAMG, I get the following error message:

[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Error in external library!
[0]PETSC ERROR: Error in HYPRE solver, error code 1!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00
CDT 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./2D_cylinders on a arch-linu named
c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
[0]PETSC ERROR: Libraries linked from
/work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
[0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
[0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
--with-clanguage=cxx --with-pthreadclasses=1
--with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
--with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
--download-hdf5=1 --download-metis=1 --download-parmetis=1
--download-superlu_dist=1
[0]PETSC ERROR:

[0]PETSC ERROR: PCApply_HYPRE() line 160 in src/ksp/pc/impls/hypre/hypre.c
[0]PETSC ERROR: PCApply() line 384 in src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPInitialResidual() line 57 in
src/ksp/ksp/interface/itres.c
[0]PETSC ERROR: KSPSolve_BCGS() line 65 in src/ksp/ksp/impls/bcgs/bcgs.c
[0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: solve() line 402 in
unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Error in external library!
[0]PETSC ERROR: Error in HYPRE_IJMatrixDestroy()!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00
CDT 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./2D_cylinders on a arch-linu named
c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
[0]PETSC ERROR: Libraries linked from
/work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
[0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
[0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
--with-clanguage=cxx --with-pthreadclasses=1
--with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
--with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
--download-hdf5=1 --download-metis=1 --download-parmetis=1
--download-superlu_dist=1
[0]PETSC ERROR:

[0]PETSC ERROR: PCDestroy_HYPRE() line 178 in src/ksp/pc/impls/hypre/hypre.c
[0]PETSC ERROR: PCDestroy() line 119 in src/ksp/pc/interface/precon.c
[0]PETSC ERROR: KSPDestroy() line 786 in src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: clear() line 438 in
unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
[0]PETSC ERROR:

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see
http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: -  Stack Frames

[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:   INSTEAD the line number of the start of the function
[0]PETSC ERROR:   is given.
[0]PETSC ERROR: [0] VecDestroy line 574 src/vec/vec/interface/vector.c
[0]PETSC ERROR: [0] KSPReset_BCGS line 194 src/ksp/ksp/impls/bcgs/bcgs.c
[0]PETSC ERROR: [0] KSPReset line 729 src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: [0] KSPDestroy line 768 src/ksp/ksp/interface/itfunc.c
[0]PETSC ERROR: [0] PetscError line 343 src/sys/error/err.c
[0]PETSC ERROR: [0] HYPRE_IJMatrixDestroy 

[petsc-users] Petsc crashes with intel compiler

2012-08-31 Thread Mohammad Mirzadeh
I'll try it right away

On Fri, Aug 31, 2012 at 4:51 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Can you remove pthreadclasses from your configure? If the problem is still
 there, it looks a bit like a misconfigured MPI. But get rid of the
 pthreadclasses first.


 On Fri, Aug 31, 2012 at 6:38 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 hi guys,

 After doing many many tests across different machine (Linux box and a
 Mac) and clusters (Gordon and Lonestar) I'm able to reproduce a test in
 which my code crashes if I use a petsc version that is compiled with intel
 compiler. This does not happen if I use gcc. When I use BiCGSTAB solver
 along with hypre's BoomerAMG, I get the following error message:

 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Error in external library!
 [0]PETSC ERROR: Error in HYPRE solver, error code 1!
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00
 CDT 2012
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
 [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
 [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: PCApply_HYPRE() line 160 in src/ksp/pc/impls/hypre/hypre.c
 [0]PETSC ERROR: PCApply() line 384 in src/ksp/pc/interface/precon.c
 [0]PETSC ERROR: KSPInitialResidual() line 57 in
 src/ksp/ksp/interface/itres.c
 [0]PETSC ERROR: KSPSolve_BCGS() line 65 in src/ksp/ksp/impls/bcgs/bcgs.c
 [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
 [0]PETSC ERROR: solve() line 402 in
 unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Error in external library!
 [0]PETSC ERROR: Error in HYPRE_IJMatrixDestroy()!
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13 15:42:00
 CDT 2012
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
 [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
 [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: PCDestroy_HYPRE() line 178 in
 src/ksp/pc/impls/hypre/hypre.c
 [0]PETSC ERROR: PCDestroy() line 119 in src/ksp/pc/interface/precon.c
 [0]PETSC ERROR: KSPDestroy() line 786 in src/ksp/ksp/interface/itfunc.c
 [0]PETSC ERROR: clear() line 438 in
 unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
 probably memory access out of range
 [0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
 [0]PETSC ERROR: or see
 http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
 [0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X
 to find memory corruption errors
 [0]PETSC ERROR: likely location of problem given in stack below
 [0]PETSC ERROR: -  Stack Frames
 
 [0]PETSC ERROR: Note: The EXACT line numbers in the stack are not
 available,
 [0]PETSC ERROR:   INSTEAD the line number

[petsc-users] Petsc crashes with intel compiler

2012-08-31 Thread Mohammad Mirzadeh
Alright, just removed pthreadclasses and it did not help. The code still
crashes.

Matt, I'm going to try dropping MKL now; however, don't you think it's
strange that the gcc version works even when I use MKL?

On Fri, Aug 31, 2012 at 5:12 PM, Matthew Knepley knepley at gmail.com wrote:

 On Fri, Aug 31, 2012 at 6:51 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Can you remove pthreadclasses from your configure? If the problem is
 still there, it looks a bit like a misconfigured MPI. But get rid of the
 pthreadclasses first.


 After that, I would get rid of MKL in favor of --download-f-blas-lapack.
 My guess is that MKL is doing something
 crazy in the compiler.

 Matt


 On Fri, Aug 31, 2012 at 6:38 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 hi guys,

 After doing many many tests across different machine (Linux box and a
 Mac) and clusters (Gordon and Lonestar) I'm able to reproduce a test in
 which my code crashes if I use a petsc version that is compiled with intel
 compiler. This does not happen if I use gcc. When I use BiCGSTAB solver
 along with hypre's BoomerAMG, I get the following error message:

 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Error in external library!
 [0]PETSC ERROR: Error in HYPRE solver, error code 1!
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13
 15:42:00 CDT 2012
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
 [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
 [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: PCApply_HYPRE() line 160 in
 src/ksp/pc/impls/hypre/hypre.c
 [0]PETSC ERROR: PCApply() line 384 in src/ksp/pc/interface/precon.c
 [0]PETSC ERROR: KSPInitialResidual() line 57 in
 src/ksp/ksp/interface/itres.c
 [0]PETSC ERROR: KSPSolve_BCGS() line 65 in src/ksp/ksp/impls/bcgs/bcgs.c
 [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
 [0]PETSC ERROR: solve() line 402 in
 unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Error in external library!
 [0]PETSC ERROR: Error in HYPRE_IJMatrixDestroy()!
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13
 15:42:00 CDT 2012
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
 [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
 [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: PCDestroy_HYPRE() line 178 in
 src/ksp/pc/impls/hypre/hypre.c
 [0]PETSC ERROR: PCDestroy() line 119 in src/ksp/pc/interface/precon.c
 [0]PETSC ERROR: KSPDestroy() line 786 in src/ksp/ksp/interface/itfunc.c
 [0]PETSC ERROR: clear() line 438 in
 unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
 probably memory access out of range
 [0]PETSC ERROR: Try option -start_in_debugger or
 -on_error_attach_debugger
 [0]PETSC ERROR: or see
 http://www.mcs.anl.gov/petsc/documentation/faq.html

[petsc-users] Petsc crashes with intel compiler

2012-08-31 Thread Mohammad Mirzadeh
I tried OpenMPI and still have the same problem!

Barry, unfortunately there are no other intel compilers, and pgi ... well, I
never really figured out how to get that thing working!
Also, I tried one of the petsc examples and that one seems to work with petsc
builds using intel ... I'm really baffled here. On the one hand the petsc
examples work; on the other hand my code also works if I use gcc and is
valgrind clean. It could be a bug in the intel compiler that does not show
itself in the petsc examples, or it could be a bug in my code that does not
show itself under gcc; who knows!

For the moment I'm just gonna use gcc hoping my simulation is not gonna
crash next month! I hate memory issues ...

On Fri, Aug 31, 2012 at 7:27 PM, Barry Smith bsmith at mcs.anl.gov wrote:


   Try a different version of the Intel compiler; it is a good chance it is
 intel compiler bugs.

Barry

 On Aug 31, 2012, at 6:53 PM, Mohammad Mirzadeh mirzadeh at gmail.com wrote:

  I'll try it right away
 
  On Fri, Aug 31, 2012 at 4:51 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
  Can you remove pthreadclasses from your configure? If the problem is
 still there, it looks a bit like a misconfigured MPI. But get rid of the
 pthreadclasses first.
 
 
  On Fri, Aug 31, 2012 at 6:38 PM, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:
  hi guys,
 
  After doing many many tests across different machine (Linux box and a
 Mac) and clusters (Gordon and Lonestar) I'm able to reproduce a test in
 which my code crashes if I use a petsc version that is compiled with intel
 compiler. This does not happen if I use gcc. When I use BiCGSTAB solver
 along with hypre's BoomerAMG, I get the following error message:
 
  [0]PETSC ERROR: - Error Message
 
  [0]PETSC ERROR: Error in external library!
  [0]PETSC ERROR: Error in HYPRE solver, error code 1!
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13
 15:42:00 CDT 2012
  [0]PETSC ERROR: See docs/changes/index.html for recent updates.
  [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
  [0]PETSC ERROR: See docs/index.html for manual pages.
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
  [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
  [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
  [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: PCApply_HYPRE() line 160 in
 src/ksp/pc/impls/hypre/hypre.c
  [0]PETSC ERROR: PCApply() line 384 in src/ksp/pc/interface/precon.c
  [0]PETSC ERROR: KSPInitialResidual() line 57 in
 src/ksp/ksp/interface/itres.c
  [0]PETSC ERROR: KSPSolve_BCGS() line 65 in src/ksp/ksp/impls/bcgs/bcgs.c
  [0]PETSC ERROR: KSPSolve() line 446 in src/ksp/ksp/interface/itfunc.c
  [0]PETSC ERROR: solve() line 402 in
 unknowndirectory//work/02032/mmirzade/codes/CASL/lib/algebra/petscLinearSolver.cpp
  [0]PETSC ERROR: - Error Message
 
  [0]PETSC ERROR: Error in external library!
  [0]PETSC ERROR: Error in HYPRE_IJMatrixDestroy()!
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: Petsc Release Version 3.3.0, Patch 2, Fri Jul 13
 15:42:00 CDT 2012
  [0]PETSC ERROR: See docs/changes/index.html for recent updates.
  [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
  [0]PETSC ERROR: See docs/index.html for manual pages.
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: ./2D_cylinders on a arch-linu named
 c341-116.ls4.tacc.utexas.edu by mmirzade Fri Aug 31 18:13:15 2012
  [0]PETSC ERROR: Libraries linked from
 /work/02032/mmirzade/soft/petsc-3.3-p2/arch-linux-intel-debug/lib
  [0]PETSC ERROR: Configure run at Fri Aug 31 17:22:42 2012
  [0]PETSC ERROR: Configure options PETSC_ARCH=arch-linux-intel-debug
 --with-clanguage=cxx --with-pthreadclasses=1
 --with-mpi-dir=/opt/apps/intel11_1/mvapich2/1.6
 --with-blas-lapack-dir=/opt/apps/intel/11.1/mkl --download-hypre=1
 --download-hdf5=1 --download-metis=1 --download-parmetis=1
 --download-superlu_dist=1
  [0]PETSC ERROR:
 
  [0]PETSC ERROR: PCDestroy_HYPRE() line 178 in
 src/ksp/pc/impls/hypre/hypre.c
  [0]PETSC ERROR: PCDestroy() line 119

[petsc-users] Regarding changing vector/matrix type at runtime

2012-08-23 Thread Mohammad Mirzadeh
Hi guys,

I've added a small function to only solve the linear system in parallel for
problems that are not big enough to justify parallelizing the whole thing
but would still take some time to solve in serial -- so far it's been
handy. To prevent an extra copy from my own format to petsc's, I'm using the
VecCreateMPIWithArray and MatCreateMPIWithSplitArray functions and they
work with MPI on very few processes (like 4 or so). I have a couple of
questions:

1) Is it worth considering using pThread and/or CUSP versions of the
matrices for such problems?
2) If so, what would be a better approach: using hardwired vector types in
the code, or using generic VecCreate functions to be able to change the type
at run-time?
3) Are there equivalent versions of the XXXWithArray functions for CUSP and
pThread types? I don't seem to find them in the manual pages.

Thanks


[petsc-users] Regarding changing vector/matrix type at runtime

2012-08-23 Thread Mohammad Mirzadeh
 The CUSP case wouldn't help because chances are you are not holding
memory in CUSP format living on the device.
That makes sense.

Thanks for the help, Jed.

On Thu, Aug 23, 2012 at 11:40 AM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, Aug 23, 2012 at 1:25 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I've added a small function to only solve the linear system in parallel
 for problems that are not big enough to justify parallelizing the whole
 thing but would still take some time to solve in serial -- so far It's been
 handy. To prevent extra copy from my own format to petsc's, I'm using
 VecCreateMPIWithArray and MatCreateMPIWithSplitArray functions and they
 work with MPI on very few processes (like 4 or so). I have a couple of
 questions:

 1) Is it worth considering using pThread and/or CUSP versions of the
 matrices for such problems?


 The old pthread types are being removed in favor of threading support for
 normal formats.


 2) If so, what would be a better approach. Using hardwired vector types
 in the code or using generic VecCreate functions to be able to change the
 type at run-time?


 Just use VecSetType()/MatSetType() to switch. You should really profile
 just using MatSetValues() to assemble. It's very likely that you are
 needlessly complicating your code by trying to skip a copy.
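
 For instance, a minimal sketch of that path for a Vec (the size and the
 values below are just placeholders, and the Mat side is analogous) could
 look like:

 /* vec_sketch.c -- minimal sketch: create the vector generically, let
    -vec_type pick the implementation at run time (or call VecSetType()
    explicitly), and fill it with VecSetValues(). */
 #include <petscvec.h>
 int main(int argc, char **argv)
 {
   Vec         x;
   PetscInt    i, n = 100, lo, hi;   /* n is a placeholder size */
   PetscScalar v;
   PetscInitialize(&argc, &argv, (char*)0, PETSC_NULL);
   VecCreate(PETSC_COMM_WORLD, &x);
   VecSetSizes(x, PETSC_DECIDE, n);
   VecSetFromOptions(x);             /* honors -vec_type seq|mpi|... */
   VecGetOwnershipRange(x, &lo, &hi);
   for (i = lo; i < hi; i++) {
     v = (PetscScalar)i;             /* placeholder values */
     VecSetValues(x, 1, &i, &v, INSERT_VALUES);
   }
   VecAssemblyBegin(x);
   VecAssemblyEnd(x);
   VecDestroy(&x);
   PetscFinalize();
   return 0;
 }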


 3) Are there equivalent versions of XXXWithArray functions for CUSP and
 pThread types? I don't seem to find them in manual page.


 The CUSP case wouldn't help because chances are you are not holding memory
 in CUSP format living on the device.




[petsc-users] issue with different versions of petsc-dev

2012-08-17 Thread Mohammad Mirzadeh
You mean columns within a row? Nope.

On Fri, Aug 17, 2012 at 12:18 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Fri, Aug 17, 2012 at 2:11 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I just realized my code breaks with new versions of petsc-dev. I use the
 MatCreateMPIAIJWithSplitArrays() function and it works with this version:

 Petsc Development HG revision: b27fc9457bdc002c5a5de2b07797ae1a93094370
  HG Date: Thu May 03 13:42:15 2012 -0500

 but breaks with:

 Petsc Development HG revision: f9c6cac2d69c724a2258d4e0dc2f51b0825aa875
  HG Date: Thu Aug 16 08:37:18 2012 -0700

 Here's the error code:
 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Argument out of range!
 [0]PETSC ERROR: Column entry number 1 (actual colum 2) in row 3 is not
 sorted!


 Are your rows sorted?


  [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Development HG revision:
 f9c6cac2d69c724a2258d4e0dc2f51b0825aa875  HG Date: Thu Aug 16 08:37:18 2012
 -0700
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./FVM on a arch-linu named mohammad-desktop by mohammad
 Fri Aug 17 12:06:15 2012
 [0]PETSC ERROR: Libraries linked from
 /home/mohammad/soft/petsc-dev_new/petsc-dev/arch-linux2-cxx-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 17 11:45:31 2012
 [0]PETSC ERROR: Configure options --with-clanguage=cxx
 --with-mpi-dir=/usr/bin --download-f-blas-lapack=1 --download-hypre=1
 --download-parmetis=1 --download-metis=1 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: MatCreateSeqAIJWithArrays() line 4268 in
 /home/mohammad/soft/petsc-dev_new/petsc-dev/src/mat/impls/aij/seq/aij.c
 [0]PETSC ERROR: MatCreateMPIAIJWithSplitArrays() line 5731 in
 /home/mohammad/soft/petsc-dev_new/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
 [0]PETSC ERROR: setLinearSystemMatrix() line 84 in
 unknowndirectory/../../../CASL/lib/algebra/petscLinearSolver.cpp

 Is there anything changed regarding numbering of diagonal and
 off-diagonal entries when using the said function between these two
 versions?

 Thanks,
 Mohammad





[petsc-users] issue with different versions of petsc-dev

2012-08-17 Thread Mohammad Mirzadeh
Thanks -- will do. Just curious, what's the reason for sorting?

On Fri, Aug 17, 2012 at 12:26 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Fri, Aug 17, 2012 at 2:20 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 You mean columns within a row? Nope.


 Okay, they are supposed to be. You can loop over your rows calling
 PetscSortInt() on each row separately.

  You didn't get the error before because debugging wasn't on, but you were
 also very lucky because many matrix kernels would have behaved incorrectly
 with unsorted data.
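
 For example, a loop like the one below (the CSR arrays i/j/a are
 placeholders for whatever you store) sorts each row in place;
 PetscSortIntWithScalarArray() keeps the values moving together with the
 column indices:

 /* Sort the column indices of each row of a CSR triple (i, j, a) in place
    before passing the arrays to MatCreateMPIAIJWithSplitArrays(); the array
    names are placeholders for whatever your code stores. */
 #include <petscsys.h>
 static PetscErrorCode SortCSRRows(PetscInt m, const PetscInt i[], PetscInt j[], PetscScalar a[])
 {
   PetscInt       row, nnz;
   PetscErrorCode ierr;
   PetscFunctionBegin;
   for (row = 0; row < m; row++) {
     nnz  = i[row+1] - i[row];
     ierr = PetscSortIntWithScalarArray(nnz, j + i[row], a + i[row]);CHKERRQ(ierr);
   }
   PetscFunctionReturn(0);
 }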




 On Fri, Aug 17, 2012 at 12:18 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Fri, Aug 17, 2012 at 2:11 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I just realize my code breaks with new versions of petsc-dev. I use
 MatCreateMPIAIJWithSplitArrays() funtions and it works with this verison:

 Petsc Development HG revision: b27fc9457bdc002c5a5de2b07797ae1a93094370
  HG Date: Thu May 03 13:42:15 2012 -0500

 but breaks with:

 Petsc Development HG revision: f9c6cac2d69c724a2258d4e0dc2f51b0825aa875
  HG Date: Thu Aug 16 08:37:18 2012 -0700

 Here's the error code:
 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Argument out of range!
 [0]PETSC ERROR: Column entry number 1 (actual colum 2) in row 3 is not
 sorted!


 Are your rows sorted?


  [0]PETSC ERROR:
 
 [0]PETSC ERROR: Petsc Development HG revision:
 f9c6cac2d69c724a2258d4e0dc2f51b0825aa875  HG Date: Thu Aug 16 08:37:18 2012
 -0700
 [0]PETSC ERROR: See docs/changes/index.html for recent updates.
 [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
 [0]PETSC ERROR: See docs/index.html for manual pages.
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: ./FVM on a arch-linu named mohammad-desktop by mohammad
 Fri Aug 17 12:06:15 2012
 [0]PETSC ERROR: Libraries linked from
 /home/mohammad/soft/petsc-dev_new/petsc-dev/arch-linux2-cxx-debug/lib
 [0]PETSC ERROR: Configure run at Fri Aug 17 11:45:31 2012
 [0]PETSC ERROR: Configure options --with-clanguage=cxx
 --with-mpi-dir=/usr/bin --download-f-blas-lapack=1 --download-hypre=1
 --download-parmetis=1 --download-metis=1 --download-superlu_dist=1
 [0]PETSC ERROR:
 
 [0]PETSC ERROR: MatCreateSeqAIJWithArrays() line 4268 in
 /home/mohammad/soft/petsc-dev_new/petsc-dev/src/mat/impls/aij/seq/aij.c
 [0]PETSC ERROR: MatCreateMPIAIJWithSplitArrays() line 5731 in
 /home/mohammad/soft/petsc-dev_new/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
 [0]PETSC ERROR: setLinearSystemMatrix() line 84 in
 unknowndirectory/../../../CASL/lib/algebra/petscLinearSolver.cpp

 Is there anything changed regarding numbering of diagonal and
 off-diagonal entries when using the said function between these two
 versions?

 Thanks,
 Mohammad







[petsc-users] MatSetValues: reorder the values

2012-05-27 Thread Mohammad Mirzadeh
 Of course, this new function sounds unnecessary because reordering v is 
 equivalent to reordering
 idxm[] and idxn[].  But the second method seems not an easy task to me, is it?


Why not use a single AO object to hold the mapping between the PETSc ordering
and your own? Seems to me that the indices used in v[]
are completely arbitrary; they should always go in [0,1,...,m*n], don't
they?
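
As a rough sketch (the numberings below are made up), the AO would be built
once and then applied to idxm[]/idxn[] right before each MatSetValues() call:

/* ao_sketch.c -- minimal sketch: build an AO from an application ordering
   (the app[] and idx[] arrays below are made-up examples) and translate
   index arrays into the PETSc ordering before calling MatSetValues(). */
#include <petscao.h>
int main(int argc, char **argv)
{
  AO       ao;
  PetscInt app[] = {3, 0, 2, 1};  /* example application numbering */
  PetscInt idx[] = {3, 2};        /* indices given in the application numbering */
  PetscInitialize(&argc, &argv, (char*)0, PETSC_NULL);
  AOCreateBasic(PETSC_COMM_WORLD, 4, app, PETSC_NULL, &ao); /* PETSC_NULL => natural PETSc ordering */
  AOApplicationToPetsc(ao, 2, idx);  /* idx[] is now in the PETSc ordering */
  AODestroy(&ao);
  PetscFinalize();
  return 0;
}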


[petsc-users] petsc example with known scalability

2012-05-18 Thread Mohammad Mirzadeh
Hi guys,

I'm trying to generate scalability plots for my code and do profiling and
fine tuning. In doing so I have noticed that some of the factors affecting
my results are sort of subtle. For example, I figured, the other day, that
using all of the cores on a single node is somewhat (50-60%) slower when
compared to using only half of the cores which I suspect is due to memory
bandwidth and/or other hardware-related issues.

So I thought to ask and see if there is any example in petsc that has been
tested for scalability and has been documented? Basically I want to use
this test example as a benchmark to compare my results with. My own test
code is currently a linear Poisson solver on an adaptive quadtree grid and
involves non-trivial geometry (well basically a circle for the boundary but
still not a simple box).

Thanks,
Mohammad


[petsc-users] petsc example with known scalability

2012-05-18 Thread Mohammad Mirzadeh
I see; well that's a fair point. So I have my timing results obtained via
-log_summary; what should I be looking into for MatMult? Should I be
looking at wall timings? Or do I need to look into MFlops/s? I'm sorry, but
I'm not sure what measure I should be looking at to determine scalability.

Also, is there any general meaningful advice one could give in terms of
using the resources, compiler flags (beyond -O3), etc.?

Thanks,
Mohammad

On Fri, May 18, 2012 at 4:18 PM, Matthew Knepley knepley at gmail.com wrote:

 On Fri, May 18, 2012 at 7:06 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I'm trying to generate scalability plots for my code and do profiling and
 fine tuning. In doing so I have noticed that some of the factors affecting
 my results are sort of subtle. For example, I figured, the other day, that
 using all of the cores on a single node is somewhat (50-60%) slower when
 compared to using only half of the cores which I suspect is due to memory
 bandwidth and/or other hardware-related issues.

 So I thought to ask and see if there is any example in petsc that has
 been tested for scalability and has been documented? Basically I want to
 use this test example as a benchmark to compare my results with. My own
 test code is currently a linear Poisson solver on an adaptive quadtree grid
 and involves non-trivial geometry (well basically a circle for the boundary
 but still not a simple box).


 Unfortunately, I do not even know what that means. We can't guarantee a
 certain level of performance because it not
 only depends on the hardware, but how you use it (as evident in your
 case). In a perfect world, we would have an abstract
 model of the computation (available for MatMult) and your machine (not
 available anywhere) and we would automatically
 work out the consequences and tell you what to expect. Instead today, we
 tell you to look at a few key indicators like the
 MatMult event, to see what is going on. When MatMult stops scaling, you
 have run out of bandwidth.

 Matt


 Thanks,
 Mohammad




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



[petsc-users] petsc example with known scalability

2012-05-18 Thread Mohammad Mirzadeh
Yes. It's fairly flat, which I think is due to BoomerAMG: [8, 8, 9, 11, 12]

On Fri, May 18, 2012 at 5:06 PM, Matthew Knepley knepley at gmail.com wrote:

 On Fri, May 18, 2012 at 8:02 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Yes, I'm looking at weak scalability right now. I'm using BiCGSTAB with
 BoomerAMG (all default options except for rtol = 1e-12). I've not looked
 into MF/s yet but I'll surely do to see if I'm having any problem there. So
 far, just timing the KSPSolve, I get [0.231, 0.238, 0.296, 0.451, 0.599]
 seconds/KSP iteration for p=[1, 4, 16, 64, 256] with almost 93K nodes
 (matrix-row) per proc. Which is not bad I guess but still increased by a
 factor of 3 for 256 proc. Problem is, I don't know how good/bad this is. In
 fact I'm not even sure that is a valid question to ask since it may be very
 problem dependent.

 Something I just thought about: how crucial is the matrix structure for
 KSP solvers? The nodes have bad numbering and I do partitioning to get a
 better ordering here.


  Did you look at the number of iterates?

Matt



 On Fri, May 18, 2012 at 4:47 PM, Matthew Knepley knepley at gmail.comwrote:

 On Fri, May 18, 2012 at 7:43 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 I see; well that's a fair point. So i have my timing results obtained
 via -log_summary; what should I be looking into for MatMult? Should I be
 looking at wall timings? Or do I need to look into MFlops/s? I'm sorry but
 I'm not sure what measure I should be looking into to determine 
 scalability.


 Time is only meaningful in isolation if I know how big your matrix is,
 but you obviously take the ratio to see how it is scaling. I am
 assuming you are looking at weak scalability so it should remain
 constant. MF/s will let you know how the routine is performing
 independent of size, and thus is an easy way to see what is happening.
 It should scale like P, and when that drops off you have
 insufficient bandwidth. VecMDot is a good way to look at the latency of
 reductions (assuming you use GMRES). There is indeed no
 good guide to this. Barry should write one.

 Matt
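
 P.S. One thing that makes those numbers easier to read is to put the solve
 in its own logging stage, e.g. (a sketch; the stage name is arbitrary and
 ksp/b/x stand for whatever your code already has):

 /* Sketch: wrap the solve in its own logging stage so that -log_summary
    reports MatMult, VecMDot, etc. for the solve separately from setup. */
 PetscLogStage stage;
 PetscLogStageRegister("KSP solve", &stage);
 PetscLogStagePush(stage);
 KSPSolve(ksp, b, x);
 PetscLogStagePop();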


  Also, is there any general meaningful advice one could give? in terms
 of using the resources, compiler flags (beyond -O3), etc?

 Thanks,
 Mohammad

 On Fri, May 18, 2012 at 4:18 PM, Matthew Knepley knepley at 
 gmail.comwrote:

 On Fri, May 18, 2012 at 7:06 PM, Mohammad Mirzadeh mirzadeh at gmail.com
  wrote:

 Hi guys,

 I'm trying to generate scalability plots for my code and do profiling
 and fine tuning. In doing so I have noticed that some of the factors
 affecting my results are sort of subtle. For example, I figured, the 
 other
 day, that using all of the cores on a single node is somewhat (50-60%)
 slower when compared to using only half of the cores which I suspect is 
 due
 to memory bandwidth and/or other hardware-related issues.

 So I thought to ask and see if there is any example in petsc that has
 been tested for scalability and has been documented? Basically I want to
 use this test example as a benchmark to compare my results with. My own
 test code is currently a linear Poisson solver on an adaptive quadtree 
 grid
 and involves non-trivial geometry (well basically a circle for the 
 boundary
 but still not a simple box).


 Unfortunately, I do not even know what that means. We can't guarantee
 a certain level of performance because it not
 only depends on the hardware, but how you use it (as evident in your
 case). In a perfect world, we would have an abstract
 model of the computation (available for MatMult) and your machine (not
 available anywhere) and we would automatically
 work out the consequences and tell you what to expect. Instead today,
 we tell you to look at a few key indicators like the
 MatMult event, to see what is going on. When MatMult stops scaling,
 you have run out of bandwidth.

 Matt


 Thanks,
 Mohammad




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener





 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



[petsc-users] seg fault with VecGetArray

2012-05-13 Thread Mohammad Mirzadeh
On Sat, May 12, 2012 at 3:16 AM, Matthew Knepley knepley at gmail.com wrote:

 On Sat, May 12, 2012 at 4:33 AM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I'm having a really weird issue here! My code seg faults for certain
 problem size and after using gdb I have been able to pinpoint the problem
 to a VecGetArray call. Here's a series of things I have tried so far

 1) -on_error_attach_debugger - unsuccessful; does not launch
 debugger
 2) -start_in_debugger --- unsuccessful; does not start debugger


 Run with -log_summary. It will tell you what options the program got.
 Also, are there errors relating to X? Send
 all output to petsc-maint at mcs.anl.gov



Matt, -log_summary also does not generate any output! I was eventually able
to start_in_debugger using xterm. Previously I was trying to start in kdbg.
Even with xterm, -on_error_attach_debugger does not start the debugger. In
either case, starting the debugger in xterm using -start_in_debugger or
attaching the debugger myself manually, I get a segfault at VecGetArray and
then the program terminates without any further output.


 3) attaching debugger myself - code runs in debugger and seg faults
 when calling VecGetArray


 Is this a debug build? What dereference is causing the SEGV? Is the Vec a
 valid object? It sounds like
 it has been corrupted.



Yes; with the -g option. How can I check if Vec is valid?

 4) using ierr=VecGetArray;CHKERRQ(ierr) -- PETSc does not produce
 error messages; the code simply seg faults and terminates
 5) checking the values of ierr inside the debugger - They are
 all 0 up untill the code terminates; I think this means petsc does not
 generate error?
 6) checking for memory leak with valgrind --- All I get are
 leaks from OpenMPI and PetscInitialize and PetscFinalize; I think these are
 just routine and safe?



Should I attach the whole valgrind output here or send it to petsc-maint? I
just see these two repeated a couple of times:

==4508== 320 (288 direct, 32 indirect) bytes in 1 blocks are definitely
lost in loss record 2,644 of 2,665
==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
==4508==by 0x86417ED: ???
==4508==by 0x5D4D099: orte_rml_base_comm_start (in
/usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
==4508==by 0x8640AD1: ???
==4508==by 0x5D3AFE6: orte_ess_base_app_setup (in
/usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
==4508==by 0x8846E41: ???
==4508==by 0x5D23A52: orte_init (in
/usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
==4508==by 0x5A9E806: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)
==4508==by 0x5ABFD7F: PMPI_Init (in
/usr/lib/openmpi/lib/libmpi.so.0.0.1)
==4508==by 0x530A90: PetscInitialize(int*, char***, char const*, char
const*) (pinit.c:668)
==4508==by 0x4A4955: PetscSession::PetscSession(int*, char***, char
const*, char const*) (utilities.h:17)
==4508==by 0x4A1DA5: main (main_Test2.cpp:49)

==4508== 74 bytes in 1 blocks are definitely lost in loss record 2,411 of
2,665
==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
==4508==by 0x6F2DDA1: strdup (strdup.c:43)
==4508==by 0x5F85117: ??? (in /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
==4508==by 0x5F85359: mca_base_param_lookup_string (in
/usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
==4508==by 0xB301869: ???
==4508==by 0xB2F5126: ???
==4508==by 0x5F82E17: mca_base_components_open (in
/usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
==4508==by 0x5ADA6BA: mca_btl_base_open (in
/usr/lib/openmpi/lib/libmpi.so.0.0.1)
==4508==by 0xA6A9B93: ???
==4508==by 0x5F82E17: mca_base_components_open (in
/usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
==4508==by 0x5AE3C88: mca_pml_base_open (in
/usr/lib/openmpi/lib/libmpi.so.0.0.1)
==4508==by 0x5A9E9E0: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)


but eventually I get:

==4508== LEAK SUMMARY:
==4508==definitely lost: 5,949 bytes in 55 blocks
==4508==indirectly lost: 3,562 bytes in 32 blocks
==4508==  possibly lost: 0 bytes in 0 blocks
==4508==still reachable: 181,516 bytes in 2,660 blocks
==4508== suppressed: 0 bytes in 0 blocks
==4508== Reachable blocks (those to which a pointer was found) are not
shown.
==4508== To see them, rerun with: --leak-check=full --show-reachable=y

which seems considerable!


  How can we say anything without the valgrind output?

 Matt



 What else can I try to find the problem? Any recommendation is really
 appreciated!

 Thanks,
 Mohammad




 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



[petsc-users] seg fault with VecGetArray

2012-05-13 Thread Mohammad Mirzadeh
On a somewhat related topic, I recently found I have the CHKERRXX option
which throws exceptions. This seems to be a better option for me since in
some functions with return values I cannot use CHKERRQ/V. If I use
this, do I need to catch the exception myself or is it caught by petsc?

On Sun, May 13, 2012 at 4:33 PM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:



 On Sat, May 12, 2012 at 3:16 AM, Matthew Knepley knepley at gmail.comwrote:

 On Sat, May 12, 2012 at 4:33 AM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I'm having a really weird issue here! My code seg faults for certain
 problem size and after using gdb I have been able to pinpoint the problem
 to a VecGetArray call. Here's a series of things I have tried so far

 1) -on_error_attach_debugger - unsuccessful; does not launch
 debugger
 2) -start_in_debugger --- unsuccessful; does not start debugger


 Run with -log_summary. It will tell you what options the program got.
 Also, are there errors relating to X? Send
 all output to petsc-maint at mcs.anl.gov



 Matt, -log_summary also does not generate any output! I was eventually
 able to start_in_debugger using xterm. Previously I was trying to start in
 kdbg. Even with xterm, -on_error_attach_debugger does not start the
 debugger. In either case, starting the debugger in xterm using
 -start_in_debugger or attaching the debugger myself manually, I get a
 segfault at VecGetArray and then the program terminates without any further
 output.


 3) attaching debugger myself - code runs in debugger and seg faults
 when calling VecGetArray


 Is this a debug build? What dereference is causing the SEGV? Is the Vec a
 valid object? It sounds like
 it has been corrupted.



 Yes; with the -g option. How can I check if Vec is valid?

  4) using ierr=VecGetArray;CHKERRQ(ierr) -- PETSc does not produce
 error messages; the code simply seg faults and terminates
 5) checking the values of ierr inside the debugger - They are
 all 0 up untill the code terminates; I think this means petsc does not
 generate error?
 6) checking for memory leak with valgrind --- All I get are
 leaks from OpenMPI and PetscInitialize and PetscFinalize; I think these are
 just routine and safe?



 Should I attach the whole valgrind output here or send it to petsc-maint?
 I just repeast these two a couple of times!:

 ==4508== 320 (288 direct, 32 indirect) bytes in 1 blocks are definitely
 lost in loss record 2,644 of 2,665
 ==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
 ==4508==by 0x86417ED: ???
 ==4508==by 0x5D4D099: orte_rml_base_comm_start (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x8640AD1: ???
 ==4508==by 0x5D3AFE6: orte_ess_base_app_setup (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x8846E41: ???
 ==4508==by 0x5D23A52: orte_init (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x5A9E806: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x5ABFD7F: PMPI_Init (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x530A90: PetscInitialize(int*, char***, char const*, char
 const*) (pinit.c:668)
 ==4508==by 0x4A4955: PetscSession::PetscSession(int*, char***, char
 const*, char const*) (utilities.h:17)
 ==4508==by 0x4A1DA5: main (main_Test2.cpp:49)

 ==4508== 74 bytes in 1 blocks are definitely lost in loss record 2,411 of
 2,665
 ==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
 ==4508==by 0x6F2DDA1: strdup (strdup.c:43)
 ==4508==by 0x5F85117: ??? (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5F85359: mca_base_param_lookup_string (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0xB301869: ???
 ==4508==by 0xB2F5126: ???
 ==4508==by 0x5F82E17: mca_base_components_open (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5ADA6BA: mca_btl_base_open (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0xA6A9B93: ???
 ==4508==by 0x5F82E17: mca_base_components_open (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5AE3C88: mca_pml_base_open (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x5A9E9E0: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)


 but eventually I get:

 ==4508== LEAK SUMMARY:
 ==4508==definitely lost: 5,949 bytes in 55 blocks
 ==4508==indirectly lost: 3,562 bytes in 32 blocks
 ==4508==  possibly lost: 0 bytes in 0 blocks
 ==4508==still reachable: 181,516 bytes in 2,660 blocks
 ==4508== suppressed: 0 bytes in 0 blocks
 ==4508== Reachable blocks (those to which a pointer was found) are not
 shown.
 ==4508== To see them, rerun with: --leak-check=full --show-reachable=y

 which seems considerable!


  How can we say anything without the valgrind output?

 Matt



 What else can I try to find the problem? Any recommendation is really
 appreciated!

 Thanks,
 Mohammad




 --
 What most experimenters take for granted

[petsc-users] seg fault with VecGetArray

2012-05-13 Thread Mohammad Mirzadeh
On Sun, May 13, 2012 at 5:30 PM, Matthew Knepley knepley at gmail.com wrote:

 On Sun, May 13, 2012 at 7:33 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:



 On Sat, May 12, 2012 at 3:16 AM, Matthew Knepley knepley at gmail.comwrote:

 On Sat, May 12, 2012 at 4:33 AM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi guys,

 I'm having a really weird issue here! My code seg faults for certain
 problem size and after using gdb I have been able to pinpoint the problem
 to a VecGetArray call. Here's a series of things I have tried so far

 1) -on_error_attach_debugger - unsuccessful; does not launch
 debugger
 2) -start_in_debugger --- unsuccessful; does not start debugger


 Run with -log_summary. It will tell you what options the program got.
 Also, are there errors relating to X? Send
 all output to petsc-maint at mcs.anl.gov



 Matt, -log_summary also does not generate any output! I was eventually
 able to start_in_debugger using xterm. Previously I was trying to start in
 kdbg. Even with xterm, -on_error_attach_debugger does not start the
 debugger. In either case, starting the debugger in xterm using
 -start_in_debugger or attaching the debugger myself manually, I get a
 segfault at VecGetArray and then the program terminates without any further
 output.


 You SEGV before it can evidently.



Sigh ... just found it. You are right! I was, and I don't know why,
allocating a large array on the stack, and for certain sizes I was getting a
stack overflow. Thanks for the help anyway.
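
For anyone who hits the same thing, the cure is simply to put such scratch
arrays on the heap, e.g. (a sketch; n is the problem-dependent size):

/* Sketch: instead of a large automatic array such as
     PetscScalar work[n];          (blows the stack for big n)
   allocate the scratch space on the heap and free it afterwards. */
PetscScalar    *work;
PetscErrorCode  ierr;
ierr = PetscMalloc(n*sizeof(PetscScalar), &work);CHKERRQ(ierr);
/* ... use work[0..n-1] ... */
ierr = PetscFree(work);CHKERRQ(ierr);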


  3) attaching debugger myself - code runs in debugger and seg faults
 when calling VecGetArray


 Is this a debug build? What dereference is causing the SEGV? Is the Vec
 a valid object? It sounds like
 it has been corrupted.



 Yes; with the -g option. How can I check if Vec is valid?


 PetscValidHeaderSpecific(v,VEC_CLASSID,1);


  4) using ierr=VecGetArray;CHKERRQ(ierr) -- PETSc does not produce
 error messages; the code simply seg faults and terminates
 5) checking the values of ierr inside the debugger - They are
 all 0 up untill the code terminates; I think this means petsc does not
 generate error?
 6) checking for memory leak with valgrind --- All I get are
 leaks from OpenMPI and PetscInitialize and PetscFinalize; I think these are
 just routine and safe?



 Should I attach the whole valgrind output here or send it to petsc-maint?
 I just repeast these two a couple of times!:


 None of this output about lost memory matters. Send the entire output to
 petsc-maint

Matt


 ==4508== 320 (288 direct, 32 indirect) bytes in 1 blocks are definitely
 lost in loss record 2,644 of 2,665
 ==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
 ==4508==by 0x86417ED: ???
 ==4508==by 0x5D4D099: orte_rml_base_comm_start (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x8640AD1: ???
 ==4508==by 0x5D3AFE6: orte_ess_base_app_setup (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x8846E41: ???
 ==4508==by 0x5D23A52: orte_init (in
 /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
 ==4508==by 0x5A9E806: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x5ABFD7F: PMPI_Init (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x530A90: PetscInitialize(int*, char***, char const*, char
 const*) (pinit.c:668)
 ==4508==by 0x4A4955: PetscSession::PetscSession(int*, char***, char
 const*, char const*) (utilities.h:17)
 ==4508==by 0x4A1DA5: main (main_Test2.cpp:49)

 ==4508== 74 bytes in 1 blocks are definitely lost in loss record 2,411 of
 2,665
 ==4508==at 0x4C2815C: malloc (vg_replace_malloc.c:236)
 ==4508==by 0x6F2DDA1: strdup (strdup.c:43)
 ==4508==by 0x5F85117: ??? (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5F85359: mca_base_param_lookup_string (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0xB301869: ???
 ==4508==by 0xB2F5126: ???
 ==4508==by 0x5F82E17: mca_base_components_open (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5ADA6BA: mca_btl_base_open (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0xA6A9B93: ???
 ==4508==by 0x5F82E17: mca_base_components_open (in
 /usr/lib/openmpi/lib/libopen-pal.so.0.0.0)
 ==4508==by 0x5AE3C88: mca_pml_base_open (in
 /usr/lib/openmpi/lib/libmpi.so.0.0.1)
 ==4508==by 0x5A9E9E0: ??? (in /usr/lib/openmpi/lib/libmpi.so.0.0.1)


 but eventually I get:

 ==4508== LEAK SUMMARY:
 ==4508==definitely lost: 5,949 bytes in 55 blocks
 ==4508==indirectly lost: 3,562 bytes in 32 blocks
 ==4508==  possibly lost: 0 bytes in 0 blocks
 ==4508==still reachable: 181,516 bytes in 2,660 blocks
 ==4508== suppressed: 0 bytes in 0 blocks
 ==4508== Reachable blocks (those to which a pointer was found) are not
 shown.
 ==4508== To see them, rerun with: --leak-check=full --show-reachable=y

 which seems considerable!


  How can we say anything without the valgrind output

[petsc-users] seg fault with VecGetArray

2012-05-12 Thread Mohammad Mirzadeh
Hi guys,

I'm having a really weird issue here! My code seg faults for certain
problem size and after using gdb I have been able to pinpoint the problem
to a VecGetArray call. Here's a series of things I have tried so far

1) -on_error_attach_debugger - unsuccessful; does not launch debugger
2) -start_in_debugger --- unsuccessful; does not start debugger
3) attaching debugger myself - code runs in debugger and seg faults
when calling VecGetArray
4) using ierr=VecGetArray;CHKERRQ(ierr) -- PETSc does not produce
error messages; the code simply seg faults and terminates
5) checking the values of ierr inside the debugger - they are all
0 up until the code terminates; I think this means petsc does not generate an
error?
6) checking for memory leak with valgrind --- All I get are leaks
from OpenMPI and PetscInitialize and PetscFinalize; I think these are
just routine and safe?

What else can I try to find the problem? Any recommendation is really
appreciated!

Thanks,
Mohammad
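
The call site itself is nothing exotic -- essentially the standard pattern,
sketched below with a placeholder loop body:

/* Minimal sketch of the VecGetArray/VecRestoreArray pairing with error
   checking; the loop body is a placeholder and x is the Vec in question. */
PetscScalar   *a;
PetscInt       i, nlocal;
PetscErrorCode ierr;
ierr = VecGetLocalSize(x, &nlocal);CHKERRQ(ierr);
ierr = VecGetArray(x, &a);CHKERRQ(ierr);
for (i = 0; i < nlocal; i++) a[i] *= 2.0;   /* placeholder work */
ierr = VecRestoreArray(x, &a);CHKERRQ(ierr);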


[petsc-users] seg fault with VecGetArray

2012-05-12 Thread Mohammad Mirzadeh
Also, if it matters, this is in serial. The petsc version is petsc-dev and it
is compiled with g++ and OpenMPI. Also, OpenMPI is NOT installed by the PETSc
configure.

On Sat, May 12, 2012 at 1:33 AM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Hi guys,

 I'm having a really weird issue here! My code seg faults for certain
 problem size and after using gdb I have been able to pinpoint the problem
 to a VecGetArray call. Here's a series of things I have tried so far

 1) -on_error_attach_debugger - unsuccessful; does not launch debugger
 2) -start_in_debugger --- unsuccessful; does not start debugger
 3) attaching debugger myself - code runs in debugger and seg faults
 when calling VecGetArray
 4) using ierr=VecGetArray;CHKERRQ(ierr) -- PETSc does not produce
 error messages; the code simply seg faults and terminates
 5) checking the values of ierr inside the debugger - They are all
 0 up untill the code terminates; I think this means petsc does not generate
 error?
 6) checking for memory leak with valgrind --- All I get are leaks
 from OpenMPI and PetscInitialize and PetscFinalize; I think these are
 just routine and safe?

 What else can I try to find the problem? Any recommendation is really
 appreciated!

 Thanks,
 Mohammad



[petsc-users] error in petsc-dev

2012-04-30 Thread Mohammad Mirzadeh
Hi,

Just a quick question. Following
petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c, it seems that I need to
call both MatMPIAIJSetPreallocation and MatSeqAIJSetPreallocation to be
able to preallocate for both MPI and Seq matrices. Does petsc automatically
chose the relevant function when the code is run in serial and parallel? In
other words, what is the effect
of MatMPIAIJSetPreallocation(MatSeqAIJSetPreallocation) when the code is
run in serial(parallel)?

I like how several functions are abstract and can be used both in serial
and parallel (like MatCreate). Is there a similar way to just call a single
MatSetPreallocation function?

Thanks,
Mohammad

On Wed, Apr 25, 2012 at 4:04 PM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Thanks Hong; that fixed the problem.


 On Wed, Apr 25, 2012 at 11:31 AM, Hong Zhang hzhang at mcs.anl.gov wrote:

 Mohammad:


 MatCreate(comm, A);
 MatSetSizes(A, localRowSize, localColumnSize, globalRowSize,
 globalColumnSize);
 MatSetType(A, MATMPIAIJ);
 MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
 MatSetFromOptions(A);
 MatGetOwnershipRange(A, rStart, rEnd);


 This (even without MatSetType(A, MATMPIAIJ);) works with 3.2-p6 but not
 dev. The only difference I can see is 1) the order of MatSetFromOptions and
 2) I do not call MatSeqAIJSetPreallocation which I think I do not need
 anyway. Is there something I'm doing wrong?


 MatSetFromOptions() must be called before MatMPIAIJSetPreallocation().
 If user set mattype at runtime, MatSetFromOptions() picks it and set the
 type accordingly. SetPreallocation()
 will be called after the type is set.

 Hong


 Mohammd






[petsc-users] error in petsc-dev

2012-04-30 Thread Mohammad Mirzadeh
Barry, Matt,

Thank you both.

Mohammad

On Mon, Apr 30, 2012 at 6:05 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Apr 30, 2012, at 6:57 PM, Matthew Knepley wrote:

  On Mon, Apr 30, 2012 at 7:01 PM, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:
  Hi,
 
  Just a quick question. Following
 petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c, it seems that I need to
 call both MatMPIAIJSetPreallocation and MatSeqAIJSetPreallocation to be
 able to preallocate for both MPI and Seq matrices. Does petsc automatically
 chose the relevant function when the code is run in serial and parallel?

 Yes, it uses the relevant one and ignores any not relevant.

This is a common trick in PETSc. You can think of the calls as methods
 specific to a particular subclass of the Mat class. PETSc automatically
 uses all the methods that are appropriate for the particular subclass and
 ignores all the other ones.

Barry
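
   So, concretely, a sketch like this is safe either way (n and the nnz
 counts are placeholders); whichever preallocation call does not match the
 run-time type is simply ignored:

 /* Sketch (sizes and nnz counts are placeholders): both preallocation calls
    are made; the one whose subclass does not match the run-time type is a
    no-op.  MatXAIJSetPreallocation() from Matt's link can do both at once. */
 Mat A;
 MatCreate(PETSC_COMM_WORLD, &A);
 MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
 MatSetFromOptions(A);                                        /* type picked here */
 MatSeqAIJSetPreallocation(A, 5, PETSC_NULL);                 /* used if seqaij */
 MatMPIAIJSetPreallocation(A, 5, PETSC_NULL, 5, PETSC_NULL);  /* used if mpiaij */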



  In other words, what is the effect of
 MatMPIAIJSetPreallocation(MatSeqAIJSetPreallocation) when the code is run
 in serial(parallel)?
 
  I like how several functions are abstract and can be used both in serial
 and parallel (like MatCreate). Is there a similar way to just call a single
 MatSetPreallocation function?
 
 
 http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Mat/MatXAIJSetPreallocation.html
 
 Matt
 
  Thanks,
  Mohammad
 
  On Wed, Apr 25, 2012 at 4:04 PM, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:
  Thanks Hong; that fixed the problem.
 
 
  On Wed, Apr 25, 2012 at 11:31 AM, Hong Zhang hzhang at mcs.anl.gov wrote:
  Mohammad:
 
  MatCreate(comm, A);
  MatSetSizes(A, localRowSize, localColumnSize, globalRowSize,
 globalColumnSize);
  MatSetType(A, MATMPIAIJ);
  MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
  MatSetFromOptions(A);
  MatGetOwnershipRange(A, rStart, rEnd);
 
 
  This (even without MatSetType(A, MATMPIAIJ);) works with 3.2-p6 but not
 dev. The only difference I can see is 1) the order of MatSetFromOptions and
 2) I do not call MatSeqAIJSetPreallocation which I think I do not need
 anyway. Is there something I'm doing wrong?
 
  MatSetFromOptions() must be called before MatMPIAIJSetPreallocation().
  If user set mattype at runtime, MatSetFromOptions() picks it and set the
 type accordingly. SetPreallocation()
  will be called after the type is set.
 
  Hong
 
  Mohammd
 
 
 
 
 
 
  --
  What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
  -- Norbert Wiener




[petsc-users] error in petsc-dev

2012-04-25 Thread Mohammad Mirzadeh
Hong,

 Did you follow same procedural call as petsc examples,
e.g., petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c:
   ierr = MatCreate(PETSC_COMM_WORLD,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,m*n,m*n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr =
MatMPIAIJSetPreallocation(A,5,PETSC_NULL,5,PETSC_NULL);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(A,5,PETSC_NULL);CHKERRQ(ierr);

Well what I do is *similar*

MatCreate(comm, &A);
MatSetSizes(A, localRowSize, localColumnSize, globalRowSize,
globalColumnSize);
MatSetType(A, MATMPIAIJ);
MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
MatSetFromOptions(A);
MatGetOwnershipRange(A, &rStart, &rEnd);


This (even without MatSetType(A, MATMPIAIJ);) works with 3.2-p6 but not
dev. The only difference I can see is 1) the order of MatSetFromOptions and
2) I do not call MatSeqAIJSetPreallocation which I think I do not need
anyway. Is there something I'm doing wrong?

Mohammad


[petsc-users] error in petsc-dev

2012-04-25 Thread Mohammad Mirzadeh
Thanks Hong; that fixed the problem.

On Wed, Apr 25, 2012 at 11:31 AM, Hong Zhang hzhang at mcs.anl.gov wrote:

 Mohammad:


 MatCreate(comm, A);
 MatSetSizes(A, localRowSize, localColumnSize, globalRowSize,
 globalColumnSize);
 MatSetType(A, MATMPIAIJ);
 MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
 MatSetFromOptions(A);
 MatGetOwnershipRange(A, rStart, rEnd);


 This (even without MatSetType(A, MATMPIAIJ);) works with 3.2-p6 but not
 dev. The only difference I can see is 1) the order of MatSetFromOptions and
 2) I do not call MatSeqAIJSetPreallocation which I think I do not need
 anyway. Is there something I'm doing wrong?


 MatSetFromOptions() must be called before MatMPIAIJSetPreallocation().
 If user set mattype at runtime, MatSetFromOptions() picks it and set the
 type accordingly. SetPreallocation()
 will be called after the type is set.

 Hong


 Mohammd





[petsc-users] error in petsc-dev

2012-04-24 Thread Mohammad Mirzadeh
Hi,

While trying to figure out a problem, I came across the following
situation. Consider the following code:

int main (int argc, char **argv){

PetscInitialize(&argc, &argv, (char*)0, help);


Mat m;

MatCreate(PETSC_COMM_WORLD, &m);

MatSetSizes(m, 10, 10, PETSC_DECIDE, PETSC_DECIDE);

MatSetFromOptions(m);

MatAssemblyBegin(m, MAT_FINAL_ASSEMBLY);

MatAssemblyEnd(m, MAT_FINAL_ASSEMBLY);

MatView(m, PETSC_VIEWER_STDOUT_WORLD);

MatDestroy(&m);


PetscFinalize();

return 0;

}


This runs without any problem under 3.2-p6 but fails with petsc-dev:

[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Object is in wrong state!
[0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on
argument 1 mat before MatAssemblyBegin()!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Development HG revision:
5c943f0d8cbf252873fd4abeffa38c8d3c15987e  HG Date: Mon Apr 09 22:04:11 2012
-0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./petsc on a arch-linu named mohammad-laptop by mohammad
Tue Apr 24 15:55:40 2012
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Mon Apr  9 23:17:27 2012
[0]PETSC ERROR: Configure options --with-clanguage=cxx --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich=1
--download-f-blas-lapack=1 --download-hypre=1 --download-superlu_dist=1
--download-metis=1 --download-parmetis=1
[0]PETSC ERROR:

[0]PETSC ERROR: MatAssemblyBegin() line 4810 in
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR:

[0]PETSC ERROR: Caught signal number 11 SEGV: Segmentation Violation,
probably memory access out of range
[0]PETSC ERROR: Try option -start_in_debugger or -on_error_attach_debugger
[0]PETSC ERROR: or see http://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
[0]PETSC ERROR: or try http://valgrind.org on GNU/linux and Apple Mac OS X to find memory
corruption errors
[0]PETSC ERROR: likely location of problem given in stack below
[0]PETSC ERROR: -  Stack Frames

[0]PETSC ERROR: Note: The EXACT line numbers in the stack are not available,
[0]PETSC ERROR:   INSTEAD the line number of the start of the function
[0]PETSC ERROR:   is given.
[0]PETSC ERROR: [0] MatAssemblyEnd_SeqAIJ line 800
/home/mohammad/soft/petsc-dev/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: [0] MatAssemblyEnd line 4984
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR: [0] MatAssemblyBegin line 4807
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Signal received!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Development HG revision:
5c943f0d8cbf252873fd4abeffa38c8d3c15987e  HG Date: Mon Apr 09 22:04:11 2012
-0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./petsc on a arch-linu named mohammad-laptop by mohammad
Tue Apr 24 15:55:40 2012
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Mon Apr  9 23:17:27 2012
[0]PETSC ERROR: Configure options --with-clanguage=cxx --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich=1
--download-f-blas-lapack=1 --download-hypre=1 --download-superlu_dist=1
--download-metis=1 --download-parmetis=1
[0]PETSC ERROR:

[0]PETSC ERROR: User provided function() line 0 in unknown directory
unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0


Eventually I could fix this by adding MatSetUp(m) after setting the
options. Why do I need this in petsc-dev? Does this somehow preallocate the
matrix?

Thanks,
Mohammad
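
For the record, the version that runs under petsc-dev only differs by that
one call (the include line and help string are added here so the sketch is
self-contained):

/* Same test with the extra MatSetUp() that petsc-dev requires. */
#include <petscmat.h>

static char help[] = "MatSetUp test";   /* stands in for the original help string */

int main (int argc, char **argv){

    PetscInitialize(&argc, &argv, (char*)0, help);

    Mat m;
    MatCreate(PETSC_COMM_WORLD, &m);
    MatSetSizes(m, 10, 10, PETSC_DECIDE, PETSC_DECIDE);
    MatSetFromOptions(m);
    MatSetUp(m);   /* the one extra call; MatXXXSetPreallocation() would also do */
    MatAssemblyBegin(m, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(m, MAT_FINAL_ASSEMBLY);
    MatView(m, PETSC_VIEWER_STDOUT_WORLD);
    MatDestroy(&m);

    PetscFinalize();
    return 0;
}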


[petsc-users] error in petsc-dev

2012-04-24 Thread Mohammad Mirzadeh
Thanks Hong.

On Tue, Apr 24, 2012 at 4:25 PM, Hong Zhang hzhang at mcs.anl.gov wrote:

 See
 http://www.mcs.anl.gov/petsc/documentation/changes/dev.html:

 You MUST now call MatXXXSetPreallocation() or MatSetUp() on any matrix you
 create directly (not using DMCreateMatrix()) before calling MatSetValues(),
 MatSetValuesBlocked() etc.


 On Tue, Apr 24, 2012 at 6:01 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi,

 While trying to figure out a problem, I came across the following
 situation. Consider the following code:

 int main (int argc, char **argv){

 PetscInitialize(argc, argv, (char*)0, help);


 Mat m;

 MatCreate(PETSC_COMM_WORLD, m);

 MatSetSizes(m, 10, 10, PETSC_DECIDE, PETSC_DECIDE);

 MatSetFromOptions(m);

 MatAssemblyBegin(m, MAT_FINAL_ASSEMBLY);

 MatAssemblyEnd(m, MAT_FINAL_ASSEMBLY);

 MatView(m, PETSC_VIEWER_STDOUT_WORLD);

 MatDestroy(m);


 PetscFinalize();

 return 0;

 }


 This runs without any problem under 3.2-p6 but fails with petsc-dev:

 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Object is in wrong state!
 [0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on
 argument 1 mat before MatAssemblyBegin()!
 [0]PETSC ERROR:
 

 Eventually I could fix this by adding MatSetUp(m) after setting the
 options. Why do I need this in petsc-dev? Does this somehow preallocate the
 matrix?

 Yes.

 Hong


 Thanks,
 Mohammad





[petsc-users] error in petsc-dev

2012-04-24 Thread Mohammad Mirzadeh
Hong,

Is this also true with MatSetType()? petsc-dev seems to need it (3.2-p6
does not) but it's not listed in the changes link ...

On Tue, Apr 24, 2012 at 4:27 PM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Thanks Hong.


 On Tue, Apr 24, 2012 at 4:25 PM, Hong Zhang hzhang at mcs.anl.gov wrote:

 See
 http://www.mcs.anl.gov/petsc/documentation/changes/dev.html:

 You MUST now call MatXXXSetPreallocation() or MatSetUp() on any matrix
 you create directly (not using DMCreateMatrix()) before calling
 MatSetValues(), MatSetValuesBlocked() etc.


 On Tue, Apr 24, 2012 at 6:01 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi,

 While trying to figure out a problem, I came across the following
 situation. Consider the following code:

 int main (int argc, char **argv){

 PetscInitialize(&argc, &argv, (char*)0, help);


 Mat m;

 MatCreate(PETSC_COMM_WORLD, &m);

 MatSetSizes(m, 10, 10, PETSC_DECIDE, PETSC_DECIDE);

 MatSetFromOptions(m);

 MatAssemblyBegin(m, MAT_FINAL_ASSEMBLY);

 MatAssemblyEnd(m, MAT_FINAL_ASSEMBLY);

 MatView(m, PETSC_VIEWER_STDOUT_WORLD);

 MatDestroy(&m);


 PetscFinalize();

 return 0;

 }


 This runs without any problem under 3.2-p6 but fails with petsc-dev:

 [0]PETSC ERROR: - Error Message
 
 [0]PETSC ERROR: Object is in wrong state!
 [0]PETSC ERROR: Must call MatXXXSetPreallocation() or MatSetUp() on
 argument 1 mat before MatAssemblyBegin()!
 [0]PETSC ERROR:
 

 Eventually I could fix this by adding MatSetUp(m) after setting the
 options. Why do I need this in petsc-dev? Does this somehow preallocate the
 matrix?

 Yes.

 Hong


 Thanks,
 Mohammad






[petsc-users] error in petsc-dev

2012-04-24 Thread Mohammad Mirzadeh
 the line number of the start of the function
[0]PETSC ERROR:   is given.
[0]PETSC ERROR: [0] MatAssemblyEnd_SeqAIJ line 800
/home/mohammad/soft/petsc-dev/src/mat/impls/aij/seq/aij.c
[0]PETSC ERROR: [0] MatAssemblyEnd line 4984
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR: [0] MatAssemblyBegin line 4807
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR: [0] MatGetOwnershipRange line 6101
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c
[0]PETSC ERROR: [0] MatMPIAIJSetPreallocation line 3881
/home/mohammad/soft/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Signal received!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Development HG revision:
5c943f0d8cbf252873fd4abeffa38c8d3c15987e  HG Date: Mon Apr 09 22:04:11 2012
-0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./petsc on a arch-linu named mohammad-laptop by mohammad
Tue Apr 24 22:49:38 2012
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Mon Apr  9 23:17:27 2012
[0]PETSC ERROR: Configure options --with-clanguage=cxx --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich=1
--download-f-blas-lapack=1 --download-hypre=1 --download-superlu_dist=1
--download-metis=1 --download-parmetis=1
[0]PETSC ERROR:

[0]PETSC ERROR: User provided function() line 0 in unknown directory
unknown file
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0
[unset]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 59) - process 0

Even more than that, I get lots of errors apparently complaining that I
have not preallocated the matrix:

[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Argument out of range!
[0]PETSC ERROR: New nonzero at (6,209) caused a malloc!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Development HG revision:
5c943f0d8cbf252873fd4abeffa38c8d3c15987e  HG Date: Mon Apr 09 22:04:11 2012
-0500
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./petsc on a arch-linu named mohammad-laptop by mohammad
Tue Apr 24 22:44:09 2012
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-dev/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Mon Apr  9 23:17:27 2012
[0]PETSC ERROR: Configure options --with-clanguage=cxx --with-cc=gcc
--with-cxx=g++ --with-fc=gfortran --download-mpich=1
--download-f-blas-lapack=1 --download-hypre=1 --download-superlu_dist=1
--download-metis=1 --download-parmetis=1
[0]PETSC ERROR:

[0]PETSC ERROR: MatSetValues_MPIAIJ() line 507 in
/home/mohammad/soft/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
[0]PETSC ERROR: MatSetValues() line 1148 in
/home/mohammad/soft/petsc-dev/src/mat/interface/matrix.c


I do not see these with 3.2-p6. Does 3.2-p6 have any mechanism for such
alerts?
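
For what it's worth, a minimal sketch of the kind of explicit preallocation
that avoids these new-nonzero mallocs. CreatePreallocatedMatrix is a
hypothetical helper; the per-row counts d_nnz/o_nnz are assumed to come from
the application's grid connectivity, and the MatSetOption call at the end is,
as far as I know, the switch that decides whether an insertion outside the
preallocation is a hard error or just a (slow) malloc:

#include <petscmat.h>

/* d_nnz[i]/o_nnz[i]: number of nonzeros of local row i inside/outside the
   diagonal block (placeholders for whatever the grid code computes). */
PetscErrorCode CreatePreallocatedMatrix(MPI_Comm comm, PetscInt nlocal,
                                        const PetscInt d_nnz[],
                                        const PetscInt o_nnz[], Mat *A)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetFromOptions(*A);CHKERRQ(ierr);
  /* it is safe to call both; the one that does not match the matrix type is
     ignored */
  ierr = MatSeqAIJSetPreallocation(*A, 0, d_nnz);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  /* turn any insertion that falls outside the preallocation into an error
     instead of a silent malloc */
  ierr = MatSetOption(*A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_TRUE);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}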

On Tue, Apr 24, 2012 at 5:58 PM, Hong Zhang hzhang at mcs.anl.gov wrote:

 Mohammad :


 Is this also true with MatSetType()? petsc-dev seems to need it (3.2-p6
 does not) but it's not listed in the changes link ...

 There is no change on  MatSetType(). The default type is aij.
 You can check petsc examples,
 e.g. petsc-dev/src/ksp/ksp/examples/tutorials/ex2.c

 Hong



 On Tue, Apr 24, 2012 at 4:27 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Hong.


 On Tue, Apr 24, 2012 at 4:25 PM, Hong Zhang hzhang at mcs.anl.gov wrote:

 See
 http://www.mcs.anl.gov/petsc/documentation/changes/dev.html:

 You MUST now call MatXXXSetPreallocation() or MatSetUp() on any matrix
 you create directly (not using DMCreateMatrix()) before calling
 MatSetValues(), MatSetValuesBlocked() etc.


 On Tue, Apr 24, 2012 at 6:01 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Hi,

 While trying to figure out a problem, I came across the following
 situation. Consider the following code:

 int main (int argc, char **argv){

 PetscInitialize(&argc, &argv, (char*)0, help);


 Mat m;

 MatCreate(PETSC_COMM_WORLD, &m);

 MatSetSizes(m, 10, 10, PETSC_DECIDE, PETSC_DECIDE);

 MatSetFromOptions(m);

 MatAssemblyBegin(m, MAT_FINAL_ASSEMBLY);

 MatAssemblyEnd(m, MAT_FINAL_ASSEMBLY

[petsc-users] Advice on OpenMP/PETSc mix

2012-04-20 Thread Mohammad Mirzadeh
Hi guys,

I have seen multiple emails regarding this in the mailing list and I'm
afraid you might have already answered this question but I'm not quite sure!

I have objects in my code that are hard(er) to parallelize using MPI, and so
far my strategy has been to just handle them in serial such that each
process has a copy of the whole thing. This object is related to my grid
generation/information etc., so it only needs to be done once at
the beginning (no moving mesh for NOW). As a result I do not care much
about the speed since it's nothing compared to the overall solution time.
However, I do care about the memory that this object consumes, which can limit
my problem size.

So I had the following idea the other day. Is it possible/a good idea to
parallelize the grid generation using OpenMP so that each node (as opposed
to core) would share the data structure? This can save me a lot since
memory on a node is shared among cores (e.g. 32 GB/node vs 2GB/core on
Ranger). What I'm not quite sure about is how the job is scheduled when
running the code via mpirun -n Np. Should Np be the total number of cores
or nodes?

If I use, say, Np = 16 processes on one node, MPI is running 16 versions of
the code on a single node (which has 16 cores). How does OpenMP figure out
how to fork? Does it fork a total of 16 threads/MPI process = 256 threads,
or is it smart enough to just fork a total of 16 threads/node = 1 thread/core
= 16 threads? I'm a bit confused about how the job is scheduled when MPI and
OpenMP are mixed.

Do I make any sense at all?!

Thanks


[petsc-users] Advice on OpenMP/PETSc mix

2012-04-20 Thread Mohammad Mirzadeh
Thanks Aron. That would work but at the cost of wasting idle cores when
threads join and the rest of MPI-based code is running, correct?

On Fri, Apr 20, 2012 at 11:44 AM, Aron Ahmadia aron.ahmadia at 
kaust.edu.sawrote:

 If I use, say Np = 16 processes on one node, MPI is running 16 versions of
 the code on a single node (which has 16 cores). How does OpenMP figure out
 how to fork? Does it fork a total of 16 threads/MPI process = 256 threads
 or is it smart to just fork a total of 16 threads/node = 1 thread/core = 16
 threads? I'm a bit confused here how the job is scheduled when MPI and
 OpenMP are mixed?


 This is one important use for OMP_NUM_THREADS.   If you're trying to
 increase the amount of memory per process, you should map one process per
 node and set OMP_NUM_THREADS to the number of OpenMP threads you'd like.
  There are lots of tutorials and even textbooks now that discuss hybrid
 programming techniques that you should look to for more information (or you
 could try scicomp.stackexchange.com).

 Cheers,
 Aron
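
For anyone trying this, a tiny sanity-check program (plain MPI + OpenMP, no
PETSc; compiled with something like mpicc -fopenmp) that makes the behavior
Aron describes visible: each MPI rank forks its own team of OpenMP threads,
sized by OMP_NUM_THREADS, independent of how many ranks share the node.

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int rank, size;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  /* the team size each rank will fork, as set by OMP_NUM_THREADS */
  printf("rank %d of %d will fork up to %d OpenMP threads\n",
         rank, size, omp_get_max_threads());

  #pragma omp parallel
  {
    #pragma omp single
    printf("rank %d: parallel region running with %d threads\n",
           rank, omp_get_num_threads());
  }

  MPI_Finalize();
  return 0;
}

Running it as one rank per node with OMP_NUM_THREADS=16, versus 16 ranks per
node with OMP_NUM_THREADS=1, shows the two extremes being discussed.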



[petsc-users] Advice on OpenMP/PETSc mix

2012-04-20 Thread Mohammad Mirzadeh
 PETSc can use threads or the HMPI approach. We are in the process of
unifying the code for all the threading models, at which point PETSc will
be able to use your OpenMP threads.

Jed, does this mean (in the future) PETSc can automatically recognize my OpenMP
threads at the core level and use them for parallelization even if the
binary is run as 1 MPI process/node?

On Fri, Apr 20, 2012 at 12:42 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

  PETSc can use threads or the HMPI approach. We are in the process of
 unifying the code for all the threading models, at which point PETSc will
 be able to use your OpenMP threads.

  On Apr 20, 2012 1:50 PM, Mohammad Mirzadeh mirzadeh at gmail.com wrote:
 
  Thanks Aron. That would work but at the cost of wasting idle cores when
 threads join and the rest of MPI-based code is running, correct?
 
  On Fri, Apr 20, 2012 at 11:44 AM, Aron Ahmadia 
 aron.ahmadia at kaust.edu.sa wrote:
 
  If I use, say Np = 16 processes on one node, MPI is running 16
 versions of the code on a single node (which has 16 cores). How does OpenMP
 figure out how to fork? Does it fork a total of 16 threads/MPI process =
 256 threads or is it smart to just fork a total of 16 threads/node = 1
 thread/core = 16 threads? I'm a bit confused here how the job is scheduled
 when MPI and OpenMP are mixed?
 
 
  This is one important use for OMP_NUM_THREADS.   If you're trying to
 increase the amount of memory per process, you should map one process per
 node and set OMP_NUM_THREADS to the number of OpenMP threads you'd like.
  There are lots of tutorials and even textbooks now that discuss hybrid
 programming techniques that you should look to for more information (or you
 could try scicomp.stackexchange.com).
 
  Cheers,
  Aron
 
 



[petsc-users] Advice on OpenMP/PETSc mix

2012-04-20 Thread Mohammad Mirzadeh
Barry,

That's quite smart and I like it. Aside from the disadvantage that you
mentioned (which to be honest is not quite straightforward, but doable) I
have the following question.

When I do computations on, say, (rank%16 == 0) processes, do they have
access to the whole 32 GB memory or are they still bounded by the 2GB/core?
... Or in a broader sense, am I mistaken to assume that each core only
has access to 2GB?

Thanks for your support.

On Fri, Apr 20, 2012 at 2:25 PM, Barry Smith bsmith at mcs.anl.gov wrote:


   Mohammad,

 Short term for what you can do NOW.

  PETSc wants to have one MPI process per core; so start up the program
 that way (say there are 16 cores per node). In the block of code that does
 your non-MPI stuff do

 if ((rank % 16) == 0) {  /* this code is only running on one MPI
 process per node */
  build your mesh grid data structure and process it, use MPI,
 pragmas, whatever to parallelize that computation, have a big data structure
  */
 }
 have the (rank % 16) == 0  MPI processes send the grid information to
 each (rank % 16) == j MPI process the part of the grid information they
 need.
 have the (rank % 16) == 0  MPI processes delete the global big data
 structure it built.

 The rest of the program runs as a regular MPI PETSc program

 The advantage of this approach is 1) it will run today 2) it doesn't
 depend on any fragile os features or software. The disadvantage is that you
 need to figure out what part of the grid data each process needs and ship
 it from the (rank % 16) == 0  MPI processes.


Barry
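
A rough sketch of the control flow Barry outlines; GridData and the
Build/Send/Recv/Extract routines named in the comments are hypothetical
application pieces, and 16 is passed in as cores_per_node:

#include <petscsys.h>

typedef struct {
  PetscInt nnodes;   /* ...plus the whole-domain connectivity, coordinates, ... */
} GridData;

PetscErrorCode SetupGridOnePerNode(MPI_Comm comm, PetscMPIInt cores_per_node)
{
  PetscMPIInt    rank, size, j;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);

  if (rank % cores_per_node == 0) {
    GridData global;               /* the big structure lives only on these ranks */
    global.nnodes = 0;             /* global = BuildGlobalGrid();                  */
    for (j = 1; j < cores_per_node && rank + j < size; j++) {
      /* SendPart(&global, rank + j): ship to each other rank on this node only
         the part of the grid it needs */
    }
    /* ExtractLocalPart(&global, rank); then free the big structure */
  } else {
    /* RecvPart(rank - rank % cores_per_node): receive my piece */
  }
  /* from here on the program runs as a regular MPI PETSc code */
  PetscFunctionReturn(0);
}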





 On Apr 20, 2012, at 1:31 PM, Mohammad Mirzadeh wrote:

  Hi guys,
 
  I have seen multiple emails regarding this in the mailing list and I'm
 afraid you might have already answered this question but I'm not quite sure!
 
  I have objects in my code that are hard(er) to parallelize using MPI and
 so far my strategy has been to just handle them in serial such that each
 process has a copy of the whole thing. This object is related to my grid
 generation/information  etc so it only needs to be done once at the
 beginning (no moving mesh for NOW). As a result I do not care much about
 the speed since its nothing compared to the overall solution time. However,
 I do care about the memory that this object consumes and can limit my
 problem size.
 
  So I had the following idea the other day. Is it possible/good idea to
 paralleize the grid generation using OpenMP so that each node (as opposed
 to core) would share the data structure? This can save me a lot since
 memory on nodes are shared among cores (e.g. 32 GB/node vs 2GB/core on
 Ranger). What I'm not quite sure about is how the job is scheduled when
 running the code via mpirun -n Np. Should Np be the total number of cores
 or nodes?
 
  If I use, say Np = 16 processes on one node, MPI is running 16 versions
 of the code on a single node (which has 16 cores). How does OpenMP figure
 out how to fork? Does it fork a total of 16 threads/MPI process = 256
 threads or is it smart to just fork a total of 16 threads/node = 1
 thread/core = 16 threads? I'm a bit confused here how the job is scheduled
 when MPI and OpenMP are mixed?
 
  Do I make any sense at all?!
 
  Thanks




[petsc-users] Advice on OpenMP/PETSc mix

2012-04-20 Thread Mohammad Mirzadeh
Thanks Jed; I did not know I could access the whole memory.

Mohammad

On Fri, Apr 20, 2012 at 3:05 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 You have access to all the memory, though it may help do distribute it
 carefully. For example, by interleaving across NUMA regions to avoid
 filling up a single memory bus with the shared data structure that would
 later displace the smaller independent data structures that will be
 accessed in parallel. You can use libnuma on Linux for this, other
 operating systems do not currently provide decent ways to do this.

 Write the code without worrying too much about this first.
 On Apr 20, 2012 4:39 PM, Mohammad Mirzadeh mirzadeh at gmail.com wrote:

 Barry,

 That's quite smart and I like it. Aside from the disadvantage that you
 mentioned (which to be honest is not quite straight forward but doable) I
 have the following question.

 When I do computations on, say (rank%16 == 0) processes, do they have
 access to the whole 32 GB memory or are they still bounded by the 2GB/core?
 ... Or in a more broad sense, am I mistaken to assume that each core only
 has access to 2GB?

 Thanks for your support.

 On Fri, Apr 20, 2012 at 2:25 PM, Barry Smith bsmith at mcs.anl.gov wrote:


   Mohammad,

 Short term for what you can do NOW.

 PETSc wants to have one MPI process for core; so start up the
 program that way, (say there are 16 cores per node.  In the block of code
 that does your non-MPI stuff do

 if ((rank % 16) == 0) {  /* this code is only running on one MPI
 process per node */
  build your mesh grid data structure and process it, use MPI
 pramas whatever to parallelize that computation, have a big data structure
  */
 }
 have the (rank % 16) == 0  MPI processes send the grid information
 to each (rank % 16) == j MPI process the part of the grid information they
 need.
 have the (rank % 16) == 0  MPI processes delete the global big data
 structure it built.

 The rest of the program runs as a regular MPI PETSc program

 The advantage of this approach is 1) it will run today 2) it doesn't
 depend on any fragile os features or software. The disadvantage is that you
 need to figure out what part of the grid data each process needs and ship
 it from the (rank % 16) == 0  MPI processes.


Barry





 On Apr 20, 2012, at 1:31 PM, Mohammad Mirzadeh wrote:

  Hi guys,
 
  I have seen multiple emails regarding this in the mailing list and I'm
 afraid you might have already answered this question but I'm not quite sure!
 
  I have objects in my code that are hard(er) to parallelize using MPI
 and so far my strategy has been to just handle them in serial such that
 each process has a copy of the whole thing. This object is related to my
 grid generation/information  etc so it only needs to be done once at the
 beginning (no moving mesh for NOW). As a result I do not care much about
 the speed since its nothing compared to the overall solution time. However,
 I do care about the memory that this object consumes and can limit my
 problem size.
 
  So I had the following idea the other day. Is it possible/good idea to
 paralleize the grid generation using OpenMP so that each node (as opposed
 to core) would share the data structure? This can save me a lot since
 memory on nodes are shared among cores (e.g. 32 GB/node vs 2GB/core on
 Ranger). What I'm not quite sure about is how the job is scheduled when
 running the code via mpirun -n Np. Should Np be the total number of cores
 or nodes?
 
  If I use, say Np = 16 processes on one node, MPI is running 16
 versions of the code on a single node (which has 16 cores). How does OpenMP
 figure out how to fork? Does it fork a total of 16 threads/MPI process =
 256 threads or is it smart to just fork a total of 16 threads/node = 1
 thread/core = 16 threads? I'm a bit confused here how the job is scheduled
 when MPI and OpenMP are mixed?
 
  Do I make any sense at all?!
 
  Thanks





[petsc-users] Petsc CMake conf

2012-04-13 Thread Mohammad Mirzadeh
Hi,

Is there any cmake conf file that I can include in my CMakeLists.txt that
would automatically detect the include/lib directories for my petsc installation?
Right now I'm doing this manually and was wondering if it's possible to
automate it.

Thanks,
Mohammad


[petsc-users] Petsc CMake conf

2012-04-13 Thread Mohammad Mirzadeh
Sweet :D -- Thanks Jed.

Just a question though. When I run cmake in the build directory I get:

-- Performing Test MULTIPASS_TEST_1_petsc_works_minimal
-- Performing Test MULTIPASS_TEST_1_petsc_works_minimal - Failed
-- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes
-- Performing Test MULTIPASS_TEST_2_petsc_works_allincludes - Failed
-- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries
-- Performing Test MULTIPASS_TEST_3_petsc_works_alllibraries - Failed
-- Performing Test MULTIPASS_TEST_4_petsc_works_all
-- Performing Test MULTIPASS_TEST_4_petsc_works_all - Failed
-- PETSc could not be used, maybe the install is broken.
-- PETSc could not be found.  Be sure to set PETSC_DIR and PETSC_ARCH.
 (missing:  PETSC_EXECUTABLE_RUNS)
-- PETSc could not be found.  Be sure to set PETSC_DIR and PETSC_ARCH.
 (missing:  PETSC_EXECUTABLE_RUNS)


But actually I can make and run the binary without any problem! Should I
ignore this? I do set PETSC_DIR and PETSC_ARCH in my CMakeLists.txt.



On Fri, Apr 13, 2012 at 3:19 PM, Jed Brown jedbrown at mcs.anl.gov wrote:



 On Fri, Apr 13, 2012 at 17:17, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Hi,

 Are there any cmake conf file that I can include in my CMakeLists.txt
 that would automatically detect include/lib directory for my petsc
 installation? Right now I'm doing this manually and was wondering if its
 possibel to automate it.


 https://github.com/jedbrown/cmake-modules/blob/master/FindPETSc.cmake




[petsc-users] Petsc CMake conf

2012-04-13 Thread Mohammad Mirzadeh
Sure. Will let you know if I find something.

Once again thanks. One of the best things about PETSc is the community  --
You don't see this often in other packages :)

On Fri, Apr 13, 2012 at 4:01 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Fri, Apr 13, 2012 at 17:57, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 my code -- I'm experimenting with CMake to manage my project and link it
 to petsc. right now, it seems to link to petsc correctly and I can run the
 code just fine. It just produces those messages when generating the
 makefile.


 Feel free to investigate why it was failing and let me know or send a
 patch. It shouldn't be hard to figure out from the logs.



[petsc-users] Strange error(?) message

2012-04-12 Thread Mohammad Mirzadeh
Thanks Matt. When I use -start_in_debugger, it just executes the program
without any extra message whatsoever! So it's like I did not use the flag at
all.

Also, I thought to share my conclusions on using ParMetis with others. The
two important things are:

1- Make sure you have a symmetric adjacency matrix for partitioning.
Changing my non-symmetric matrix into a symmetric one solved the "Key 10716
not found!" problem.
2- Unlike PETSc, ParMetis does not assign an equal number of nodes/elements
to each processor. The numbers are close, but not the same as you start
partitioning the grid with. It does, however, produce a contiguous
distribution of nodes/elements. You can use ISPartitioningCount to figure
out how many nodes/elements are assigned to each processor (see the sketch
after this list).
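
A condensed sketch of the calls involved. PartitionGraph is a hypothetical
wrapper; nlocal/N/ia/ja are assumed to be the local CSR adjacency of the
(already symmetrized) graph and, if I remember right, MatCreateMPIAdj takes
ownership of ia/ja, so they should be allocated with PetscMalloc and not be
freed by the caller:

#include <petscmat.h>

PetscErrorCode PartitionGraph(MPI_Comm comm, PetscInt nlocal, PetscInt N,
                              PetscInt *ia, PetscInt *ja)
{
  Mat             adj;
  MatPartitioning part;
  IS              is, isnum;
  PetscMPIInt     size;
  PetscInt        *count, p;
  PetscErrorCode  ierr;

  PetscFunctionBegin;
  ierr = MPI_Comm_size(comm, &size);CHKERRQ(ierr);

  ierr = MatCreateMPIAdj(comm, nlocal, N, ia, ja, NULL, &adj);CHKERRQ(ierr);
  ierr = MatPartitioningCreate(comm, &part);CHKERRQ(ierr);
  ierr = MatPartitioningSetAdjacency(part, adj);CHKERRQ(ierr);
  ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);  /* e.g. -mat_partitioning_type parmetis */
  ierr = MatPartitioningApply(part, &is);CHKERRQ(ierr);      /* is[i] = new owner of local node i    */

  /* how many nodes each process ends up with (need not be equal) */
  ierr = PetscMalloc(size*sizeof(PetscInt), &count);CHKERRQ(ierr);
  ierr = ISPartitioningCount(is, size, count);CHKERRQ(ierr);
  for (p = 0; p < size; p++) {
    ierr = PetscPrintf(comm, "part %D gets %D nodes\n", p, count[p]);CHKERRQ(ierr);
  }
  ierr = PetscFree(count);CHKERRQ(ierr);

  /* contiguous new global numbering induced by the partitioning */
  ierr = ISPartitioningToNumbering(is, &isnum);CHKERRQ(ierr);

  ierr = ISDestroy(&isnum);CHKERRQ(ierr);
  ierr = ISDestroy(&is);CHKERRQ(ierr);
  ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
  ierr = MatDestroy(&adj);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}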

Mohammad


On Wed, Apr 11, 2012 at 5:43 PM, Matthew Knepley knepley at gmail.com wrote:

 On Wed, Apr 11, 2012 at 10:49 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 I see. Well the reason for non-symmetry is that I'm using a finite
 difference discretization on quadtree AMR grids. When discretizing at a
 hanging node, i.e. a node that is missing a direct neighbor, I use
  interpolation from nodes of the parent cell. When it comes to discretizing
 at those nodes (i.e. nodes of the parent), however, they are on a coarser
 grid and will have direct neighbors so they would not see the hanging node
 in their discretization. This leads to a non-symmetric discretization
 matrix which I was using (structurally) to define the adjacency matrix of
 the graph. I think I can fix this by requiring the parent nodes to also
 have a link to the hanging node (although they are not used in the
 discretization anyway)

 I'm just surprised that ParMetis does not complain about non-symmetric
 graphs at lower resolutions and partitions them just fine!


 ParMetis does not burden the user with a whole lot of diagnostic output :)


 As for the debugger, -start_in_debugger does not start the debugger for
 me :(. I'm using mpirun -np 2 ./a.out -start_in_debugger. Is this the
 correct way?


 Send the entire output. It is likely that at the beginning there are lines
 showing a failure to open an X window.

Matt



 On Wed, Apr 11, 2012 at 3:07 PM, Matthew Knepley knepley at gmail.comwrote:

 On Wed, Apr 11, 2012 at 8:24 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 I was reading the FAQ list when I came across the following:

 http://www.mcs.anl.gov/petsc/documentation/faq.html#key

 When calling MatPartitioningApply() you get a message Error! Key 16615
 not found The graph of the matrix you are using is not symmetric. You
 must use symmetric matrices for partitioning.

 Is this a limitation on ParMetis side? I set up the adjacency matrix
 based on the discretization that I will be performing on the grid which is
 non-symmetric; both numerically and structurally. What's the solution here?
 Make an approximate adjacency matrix that sort of looks like
 (structurally) my discretization but is symmetric? What I don't understand
 is my matrix IS non-symmetric when the code runs on coarser grids!


 I don't quite understand how you can have a non-symmetric adjacency
 description. But Metis/ParMetis partitions undirected graphs, which
 by definition have symmetric adjacency matrices. Non-symmetric adjacency
 would seem to imply a directed graph of some sort, or more plainly,
 something is a adjacent to another thing which is not adjacent to it.
 That is a very strange concept.


 Also, I was reading the FAQ hoping I can find something regarding using
 gdb in parallel. I found this:
 http://scicomp.stackexchange.com/a/410/485 but I'm not sure how I
 should be using gdb in parallel. Could you (maybe Matt?) please explain a
 little bit?


 -start_in_debugger spawns a gdb windows for EVERY process and attaches it

 -debbuger_nodes a,b,c spawns gdb windows ONLY for ranks a, b, and c

   Thanks,

  Matt


 Thanks

 On Tue, Apr 10, 2012 at 12:08 AM, Mohammad Mirzadeh mirzadeh at gmail.com
  wrote:

 Just built petsc-dev and it did not help. I'm going to look into the
 code to see if my graph is ill-formed in some sense. Just hope the
 problem is from my side not a real bug in ParMetis!



 On Mon, Apr 9, 2012 at 8:46 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 ok. Thanks Jed. I'll try petsc-dev to see if it fixes the problem.

 Thanks everyone.


 On Mon, Apr 9, 2012 at 8:42 PM, Jed Brown jedbrown at mcs.anl.govwrote:

 On Mon, Apr 9, 2012 at 22:37, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2.
 Since this was not supported with 3.2-p6, and previous versions had 
 bugs, I
 built parmetis myself and used --with-parmetis-include and
 --with-parmetis-lib flags to build petsc.

 Should I switch to petsc-dev?


 Yes, and use --download-metis --download-parmetis because the
 version upstream has some bugs for which the patches have not been 
 applied.







 --
 What most experimenters take for granted

[petsc-users] Strange error(?) message

2012-04-12 Thread Mohammad Mirzadeh
Humm. What I meant by contiguous is in terms of the new global numbering. That
is, for proc 0: [0, np0), for proc 1: [np0, np1), for proc 2: [np1, np2),
..., for proc p-1: [np(p-2), ntotal). I did not mean
(topologically) contiguous partitions. Is this not the case?

On Thu, Apr 12, 2012 at 10:03 AM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, Apr 12, 2012 at 11:56, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 2- Unlike PETSc, ParMetis does not equal number of nodes/elements to each
 processors. The numbers are close, but not the same as you start
 partitioning the grid with. It, however, does produce contigious
 distribution of nodes/elements.


 Contiguous partitions are not guaranteed.



[petsc-users] Strange error(?) message

2012-04-12 Thread Mohammad Mirzadeh
Correct! That's what I thought :)

On Thu, Apr 12, 2012 at 10:14 AM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Thu, Apr 12, 2012 at 12:08, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Humm. What I meant by contiguous is in terms of new global numberings.
 That is for proc 0: [0, np0), for proc 1: [np0, np1), for proc 2: [np1,
 np2), ..., for proc p-1: [np(p-2), ntotal). I did not mean
 (topologically) contiguous partitions. Is this not the case?


 The global numbering produced by ISPartitioningToNumbering() is
 contiguous, but each part might not be topologically connected.



[petsc-users] Strange error(?) message

2012-04-11 Thread Mohammad Mirzadeh
I was reading the FAQ list when I came across the following:

http://www.mcs.anl.gov/petsc/documentation/faq.html#key

When calling MatPartitioningApply() you get a message "Error! Key 16615 not
found" -- The graph of the matrix you are using is not symmetric. You must use
symmetric matrices for partitioning.

Is this a limitation on ParMetis side? I set up the adjacency matrix based
on the discretization that I will be performing on the grid which is
non-symmetric; both numerically and structurally. What's the solution here?
Make an approximate adjacency matrix that sort of looks like
(structurally) my discretization but is symmetric? What I don't understand
is my matrix IS non-symmetric when the code runs on coarser grids!

Also, I was reading the FAQ hoping I can find something regarding using gdb
in parallel. I found this: http://scicomp.stackexchange.com/a/410/485 but
I'm not sure how I should be using gdb in parallel. Could you (maybe Matt?)
please explain a little bit?

Thanks

On Tue, Apr 10, 2012 at 12:08 AM, Mohammad Mirzadeh mirzadeh at 
gmail.comwrote:

 Just built petsc-dev and it did not help. I'm going to look into the code
 to see if my graph is ill-formed in some sense. Just hope the problem is
 from my side not a real bug in ParMetis!



 On Mon, Apr 9, 2012 at 8:46 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 ok. Thanks Jed. I'll try petsc-dev to see if it fixes the problem.

 Thanks everyone.


 On Mon, Apr 9, 2012 at 8:42 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 22:37, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2. Since
 this was not supported with 3.2-p6, and previous versions had bugs, I built
 parmetis myself and used --with-parmetis-include and --with-parmetis-lib
 flags to build petsc.

 Should I switch to petsc-dev?


 Yes, and use --download-metis --download-parmetis because the version
 upstream has some bugs for which the patches have not been applied.






[petsc-users] Strange error(?) message

2012-04-11 Thread Mohammad Mirzadeh
I see. Well the reason for non-symmetry is that I'm using a finite
difference discretization on quadtree AMR grids. When discretizing at a
hanging node, i.e. a node that is missing a direct neighbor, I use
 interpolation from nodes of the parent cell. When it comes to discretizing
at those nodes (i.e. nodes of the parent), however, they are on a coarser
grid and will have direct neighbors so they would not see the hanging node
in their discretization. This leads to a non-symmetric discretization
matrix which I was using (structurally) to define the adjacency matrix of
the graph. I think I can fix this by requiring the parent nodes to also
have a link to the hanging node (although they are not used in the
discretization anyway).
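
A tiny sketch of that symmetrization step. SymmetrizeAdjacency is
hypothetical and works on a dense 0/1 pattern only to keep it short; a real
code would apply the same rule to its CSR (ia/ja) arrays:

#include <petscsys.h>

/* add the reverse of every edge so the graph handed to the partitioner is
   structurally symmetric (adj is an n*n row-major 0/1 pattern) */
static void SymmetrizeAdjacency(PetscInt n, PetscBool *adj)
{
  PetscInt i, j;
  for (i = 0; i < n; i++) {
    for (j = 0; j < i; j++) {
      if (adj[i*n + j] || adj[j*n + i]) {   /* hanging node sees parent, or vice versa */
        adj[i*n + j] = adj[j*n + i] = PETSC_TRUE;
      }
    }
  }
}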

I'm just surprised that ParMetis does not complain about non-symmetric
graphs at lower resolutions and partitions them just fine!

As for the debugger, -start_in_debugger does not start the debugger for me
:(. I'm using mpirun -np 2 ./a.out -start_in_debugger. Is this the correct
way?




On Wed, Apr 11, 2012 at 3:07 PM, Matthew Knepley knepley at gmail.com wrote:

 On Wed, Apr 11, 2012 at 8:24 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 I was reading the FAQ list when I came across the following:

 http://www.mcs.anl.gov/petsc/documentation/faq.html#key

 When calling MatPartitioningApply() you get a message Error! Key 16615
 not found The graph of the matrix you are using is not symmetric. You
 must use symmetric matrices for partitioning.

 Is this a limitation on ParMetis side? I set up the adjacency matrix
 based on the discretization that I will be performing on the grid which is
 non-symmetric; both numerically and structurally. What's the solution here?
 Make an approximate adjacency matrix that sort of looks like
 (structurally) my discretization but is symmetric? What I don't understand
 is my matrix IS non-symmetric when the code runs on coarser grids!


 I don't quite understand how you can have a non-symmetric adjacency
 description. But Metis/ParMetis partitions undirected graphs, which
 by definition have symmetric adjacency matrices. Non-symmetric adjacency
 would seem to imply a directed graph of some sort, or more plainly,
 something is a adjacent to another thing which is not adjacent to it. That
 is a very strange concept.


 Also, I was reading the FAQ hoping I can find something regarding using
 gdb in parallel. I found this: http://scicomp.stackexchange.com/a/410/485but 
 I'm not sure how I should be using gdb in parallel. Could you (maybe
 Matt?) please explain a little bit?


 -start_in_debugger spawns a gdb windows for EVERY process and attaches it

 -debbuger_nodes a,b,c spawns gdb windows ONLY for ranks a, b, and c

   Thanks,

  Matt


 Thanks

 On Tue, Apr 10, 2012 at 12:08 AM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Just built petsc-dev and it did not help. I'm going to look into the
 code to see if my graph is ill-formed in some sense. Just hope the
 problem is from my side not a real bug in ParMetis!



 On Mon, Apr 9, 2012 at 8:46 PM, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 ok. Thanks Jed. I'll try petsc-dev to see if it fixes the problem.

 Thanks everyone.


 On Mon, Apr 9, 2012 at 8:42 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 22:37, Mohammad Mirzadeh mirzadeh at 
 gmail.comwrote:

 Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2. Since
 this was not supported with 3.2-p6, and previous versions had bugs, I 
 built
 parmetis myself and used --with-parmetis-include and --with-parmetis-lib
 flags to build petsc.

 Should I switch to petsc-dev?


 Yes, and use --download-metis --download-parmetis because the version
 upstream has some bugs for which the patches have not been applied.







 --
 What most experimenters take for granted before they begin their
 experiments is infinitely more interesting than any results to which their
 experiments lead.
 -- Norbert Wiener



[petsc-users] Strange error(?) message

2012-04-10 Thread Mohammad Mirzadeh
Just built petsc-dev and it did not help. I'm going to look into the code
to see if my graph is ill-formed in some sense. Just hope the problem is
from my side not a real bug in ParMetis!



On Mon, Apr 9, 2012 at 8:46 PM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 ok. Thanks Jed. I'll try petsc-dev to see if it fixes the problem.

 Thanks everyone.


 On Mon, Apr 9, 2012 at 8:42 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 22:37, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2. Since
 this was not supported with 3.2-p6, and previous versions had bugs, I built
 parmetis myself and used --with-parmetis-include and --with-parmetis-lib
 flags to build petsc.

 Should I switch to petsc-dev?


 Yes, and use --download-metis --download-parmetis because the version
 upstream has some bugs for which the patches have not been applied.





[petsc-users] Global numbering obtained via ISPartitioningToNumbering is inconsistent with the partitioning indexset

2012-04-09 Thread Mohammad Mirzadeh
Hi,

I am using 'ISPartitioningToNumbering' to generate a new global numbering
from a partitioning index set and I'm baffled by the following situation.
I'm debugging my code on a simple grid consisting of 81 grid points
partitioned among two processes. When I look into the partitioning index set
(i.e. looking at the indices via ISView) I can see that 40 points have
been assigned to proc 0 and 41 to processor 1. Isn't it true that when 81
points are distributed among two processors, 41 should go to proc 0 and 40
to proc 1?

I have based my whole code on the assumption (verified before through
mailing list i guess) that natural ordering in PETSc leads to a
distribution of points such that all processors get the same number of
points ( [0, n/p) on proc 0, [n/p, 2n/p) on proc 1, ... ) unless n%p != 0,
in which case the first k (with k = n%p) processors receive 1 point extra.
Am I wrong to assume this?

Thanks,
Mohammad

PS: Is it relevant that the partitioning index set is obtained via
ParMetis?
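
For reference, the layout rule being assumed above, written out explicitly.
DefaultLocalSize is a made-up name; as far as I know this is also what
PetscSplitOwnership computes when the local size is left as PETSC_DECIDE:

#include <petscsys.h>

/* each of the p processes gets N/p points, and the first N%p processes get
   one extra */
static PetscInt DefaultLocalSize(PetscInt N, PetscMPIInt p, PetscMPIInt rank)
{
  return N/p + ((N % p) > rank ? 1 : 0);
}

For 81 points on 2 processes this gives 41 on rank 0 and 40 on rank 1, which
is why the 40/41 split coming out of the partitioning looked inconsistent.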


[petsc-users] Global numbering obtained via ISPartitioningToNumbering is inconsistent with the partitioning indexset

2012-04-09 Thread Mohammad Mirzadeh
Aaah! Thanks Barry. Just to make sure though, is my assumption on the
natural ordering of PETSc correct?

Thanks

On Mon, Apr 9, 2012 at 3:10 PM, Barry Smith bsmith at mcs.anl.gov wrote:


 On Apr 9, 2012, at 5:06 PM, Mohammad Mirzadeh wrote:

  Hi,
 
  I am using  'ISPartitioningToNumbering' to generate new global numbering
 from a partitioning indexset and I'm baffled at the following situation.
 I'm debugging my code on a simple grid consisting of 81 grid points
 partitioned among two processes. When I look into the partitioning indexset
 (i.e. looking at the indecies via ISView) I can see that 40 points have
 been assigned to proc 0 and 41 to processor 1. Isn't it true that when 81
 points are distributed among two processors 41 should go to proc 0 and 40
 to proc 1?
 
  I have based my whole code on the assumption (verified before through
 mailing list i guess) that natural ordering in PETSc leads to a
 distribution of points such that all processors get the same number of
 points ( [0, n/p) on proc 0, [n/p, 2n/p) on proc 1, ... ) unless n%p != 0,
 in which case the first k (with k = n%p) processors receive 1 point extra.
 Am I wrong to assume this?
 
  Thanks,
  Mohammad
 

  PS: Is it relevant that the partitioning indexset is obtained via
 ParMetis?partitiong

Yes, ParMetis provides no guarantee about how many points would get
 assigned to each process.

   Barry





[petsc-users] Global numbering obtained via ISPartitioningToNumbering is inconsistent with the partitioning indexset

2012-04-09 Thread Mohammad Mirzadeh
Thanks Jed for the advice. Actually I looked at the code and I think this
was the only place I had made this implicit assumption (but the most
important place). As a matter of fact, I lay out my vectors according
to the AO that I get from partitioning. I think my main mistake was that I
had assumed the partitioning uses the same ordering as the PETSc ordering.

The code seems to run correctly now :)





On Mon, Apr 9, 2012 at 5:09 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 17:25, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Just to make sure though, is my assumption on the natural ordering of
 PETSc correct?


 Yes, but please don't write code that depends on that. It's plenty easy to
 query the local size or the sizes/starts of other processes.
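
A small sketch of the query-based approach (ReportLayout is a hypothetical
helper; the same ownership-range calls exist for Mat):

#include <petscvec.h>

PetscErrorCode ReportLayout(Vec x)
{
  MPI_Comm       comm;
  PetscInt       nlocal, rstart, rend;
  const PetscInt *ranges;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscObjectGetComm((PetscObject)x, &comm);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &rank);CHKERRQ(ierr);
  ierr = VecGetLocalSize(x, &nlocal);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(x, &rstart, &rend);CHKERRQ(ierr);  /* this process owns [rstart, rend) */
  ierr = VecGetOwnershipRanges(x, &ranges);CHKERRQ(ierr);        /* starts of all processes, length size+1 */
  ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] %D local entries, range [%D, %D), ranges[%d] = %D\n",
                     rank, nlocal, rstart, rend, rank, ranges[rank]);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}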



[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
Hi,

I have this strange behavior that sometimes when I run the code on a certain
number of processors and for a certain problem size, I get a message like:

key # not found!

where # is just some number (like 9087, but it changes every time I run it). Is
there something wrong with my MPI? Is this even an MPI message or something
else? Sorry my question is this broad, but I have no clue where this
message is popping up from! PETSc does not produce any error message when this
happens and the code is valgrind clean.

I'd appreciate any input.
Thanks


[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
um, as a matter of fact, it now remains the same number as long as I use
the same number of processors!

On Mon, Apr 9, 2012 at 6:10 PM, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Hi,

 I have this strange behavior that sometimes when I run the code on certain
 number of processors and for a certain problem size, I get a message like:

 key # not found!

 where # is just some number (like 9087 but changes every time I run it).
 Is there something wrong with my MPI? is this even an MPI message or
 something else? Sorry my question is this broad, but I have no clue where
 this message is popping from! PETSc does not produce any error message when
 this happens and the code is valgrind clean.

 I'd appreciate any input.
 Thanks



[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
:( that's all I see. Do you mean running with -on_error_attach_debugger?

On Mon, Apr 9, 2012 at 7:29 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 Always send the whole error message. If this is all you see, then
 something is misbehaving and you'll have to catch the message in a debugger.


 On Mon, Apr 9, 2012 at 20:10, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Hi,

 I have this strange behavior that sometimes when I run the code on
 certain number of processors and for a certain problem size, I get a
 message like:

 key # not found!

 where # is just some number (like 9087 but changes every time I run it).
 Is there something wrong with my MPI? is this even an MPI message or
 something else? Sorry my question is this broad, but I have no clue where
 this message is popping from! PETSc does not produce any error message when
 this happens and the code is valgrind clean.

 I'd appreciate any input.
 Thanks





[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
-on_error_attach_debugger does not start the debugger. I traced it up to the
following call: MatPartitioningApply(). It runs fine up until that
function call and then gives the error.

Also, to make things even look more weird, I got the following once I was
running it:

Key 10176 not found!
INTERNAL ERROR: Invalid error class (66) encountered while returning from
PMPI_Waitall.  Please file a bug report.

Do you happen to know what that could mean? The funny thing is, it only
happened once!

On Mon, Apr 9, 2012 at 7:57 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 21:56, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 :( that's all I see. Do you mean running with -on_error_attach_debugger?


 Sure, or directly in a debugger. Does execution make it to main? Something
 is either not catching errors or is improperly calling exit() or similar.



[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
Barry,

Thanks a lot; I should definitely learn to use the debugger.

I'm not sure what makes the graph "ill-formed" in ParMetis terminology.
I'll keep looking into their manual to see if I can find anything. What
makes this even harder is that the code runs fine for certain grids and then
this just happens as I refine the grid. I'll look into the function where I
set up the adjacency matrix to see if I'm doing something wrong ...

I'll keep you posted.



On Mon, Apr 9, 2012 at 8:25 PM, Barry Smith bsmith at mcs.anl.gov wrote:


  This is part of the excellent error handling in ParMetis (sarcasm
 intended); you would have found it rather quickly in the debugger.
 (ParMetis code follows)

 idx_t BSearch(idx_t n, idx_t *array, idx_t key)
 {
   idx_t a=0, b=n, c;

   while (b-a > 8) {
     c = (a+b)>>1;
     if (array[c] > key)
       b = c;
     else
       a = c;
   }

   for (c=a; c<b; c++) {
     if (array[c] == key)
       return c;
   }

   errexit("Key %"PRIDX" not found!\n", key);

   return 0;
 }

  So either it is a bug in ParMetis or ParMetis is being passed an
 ill-formed graph and not detecting the ill-formed graph cleanly.

   What to do? Well, we can't spend our lives debugging other people's
 libraries, so ... I am not sure what to do next. Validate that ParMetis is
 getting reasonable input?

   Barry


 On Apr 9, 2012, at 10:19 PM, Mohammad Mirzadeh wrote:

  -on_error_attach_debugger does not start the debugger. I traced it upto
 the following call:  MatPartitioningApply(). It runs fine up until that
 function call and then gives the error.
 
  Also, to make things even look more weird, I got the following once I
 was running it:
 
  Key 10176 not found!
  INTERNAL ERROR: Invalid error class (66) encountered while returning from
  PMPI_Waitall.  Please file a bug report.
 
  Do you happen to know what that could mean? The funny thing is, it only
 happened once!
 
  On Mon, Apr 9, 2012 at 7:57 PM, Jed Brown jedbrown at mcs.anl.gov wrote:
  On Mon, Apr 9, 2012 at 21:56, Mohammad Mirzadeh mirzadeh at gmail.com
 wrote:
  :( that's all I see. Do you mean running with -on_error_attach_debugger?
 
  Sure, or directly in a debugger. Does execution make it to main?
 Something is either not catching errors or is improperly calling exit() or
 similar.
 




[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2. Since this
was not supported with 3.2-p6, and previous versions had bugs, I built
parmetis myself and used --with-parmetis-include and --with-parmetis-lib
flags to build petsc.

Should I switch to petsc-dev?

On Mon, Apr 9, 2012 at 8:27 PM, Sean Farley sean at mcs.anl.gov wrote:

 -on_error_attach_debugger does not start the debugger. I traced it upto
 the following call:  MatPartitioningApply(). It runs fine up until that
 function call and then gives the error.

 Also, to make things even look more weird, I got the following once I was
 running it:

 Key 10176 not found!
 INTERNAL ERROR: Invalid error class (66) encountered while returning from
 PMPI_Waitall.  Please file a bug report.

 Do you happen to know what that could mean? The funny thing is, it only
 happened once!


 This is a parmetis error from libparmetis/util.c:78 and not with petsc
 (well, at least, not directly an error from petsc). How did you build
 parmetis? Which parmetis version and which petsc version are you running?
 You'll probably have to send configure.log to petsc-maint at mcs.anl.gov.

 By the way, the relevant function that is having the error is:

 /*
 * This function does a binary search on an array for a key and returns
 * the index
 **/
 idx_t BSearch(idx_t n, idx_t *array, idx_t key)

 ... which doesn't help much unless you can step into the debugger and set
 a breakpoint here to give a traceback.



[petsc-users] Strange error(?) message

2012-04-09 Thread Mohammad Mirzadeh
ok. Thanks Jed. I'll try petsc-dev to see if it fixes the problem.

Thanks everyone.

On Mon, Apr 9, 2012 at 8:42 PM, Jed Brown jedbrown at mcs.anl.gov wrote:

 On Mon, Apr 9, 2012 at 22:37, Mohammad Mirzadeh mirzadeh at gmail.comwrote:

 Thanks Sean. I'm using Petsc 3.2-p6 along with ParMetis 4.0.2. Since this
 was not supported with 3.2-p6, and previous versions had bugs, I built
 parmetis myself and used --with-parmetis-include and --with-parmetis-lib
 flags to build petsc.

 Should I switch to petsc-dev?


 Yes, and use --download-metis --download-parmetis because the version
 upstream has some bugs for which the patches have not been applied.



[petsc-users] AOCreateMapping

2012-03-19 Thread Mohammad Mirzadeh
Hi,

When using AOCreateMapping, it is not possible to ask for the mapping of
indices that do not exist in the AO -- you get "[0]PETSC ERROR: Argument
out of range!":

[0]PETSC ERROR: - Error Message

[0]PETSC ERROR: Argument out of range!
[0]PETSC ERROR: Invalid input index 21!
[0]PETSC ERROR:

[0]PETSC ERROR: Petsc Release Version 3.2.0, Patch 6, Wed Jan 11 09:28:45
CST 2012
[0]PETSC ERROR: See docs/changes/index.html for recent updates.
[0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
[0]PETSC ERROR: See docs/index.html for manual pages.
[0]PETSC ERROR:

[0]PETSC ERROR: ./petsc on a arch-linu named mohammad-laptop by mohammad
Mon Mar 19 17:16:13 2012
[0]PETSC ERROR: Libraries linked from
/home/mohammad/soft/petsc-3.2-p6/arch-linux2-cxx-debug/lib
[0]PETSC ERROR: Configure run at Thu Feb 16 02:16:40 2012
[0]PETSC ERROR: Configure options --with-cc=gcc --with-cxx=g++
--with-fc=gfortran --with-clanguage=cxx --download-f-blas-lapack=1
--download-mpich=1 --download-hypre=1 --download-ml=1
--with-parmetis-include=/home/mohammad/soft/parmetis/include
--with-parmetis-lib=-L/home/mohammad/soft/parmetis/lib -lparmetis -lmetis
--download-superlu_dist=1
[0]PETSC ERROR:

[0]PETSC ERROR: AOApplicationToPetsc_Mapping() line 136 in
/home/mohammad/soft/petsc-3.2-p6/src/dm/ao/impls/mapping/aomapping.c
[0]PETSC ERROR: AOApplicationToPetsc() line 250 in
/home/mohammad/soft/petsc-3.2-p6/src/dm/ao/interface/ao.c


Is there an easy way to have the AO return a flag integer (like -1) for
nodes that do not exist in the AO?

Mohammad
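
One possible workaround, sketched below. AOApplicationToPetscOrFlag is a
hypothetical helper; app is assumed to be a sorted copy of all application
indices the AO was created with (gathered across processes if necessary).
Indices that are not in the AO are never handed to AOApplicationToPetsc at
all; they are temporarily replaced by a valid index and flagged with -1
afterwards:

#include <petscao.h>
#include <stdlib.h>

static int cmp_petscint(const void *a, const void *b)
{
  PetscInt x = *(const PetscInt*)a, y = *(const PetscInt*)b;
  return (x < y) ? -1 : (x > y);
}

PetscErrorCode AOApplicationToPetscOrFlag(AO ao, PetscInt n, PetscInt ia[],
                                          PetscInt napp, const PetscInt app[])
{
  PetscBool      *missing;
  PetscInt       i;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = PetscMalloc(n*sizeof(PetscBool), &missing);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    missing[i] = bsearch(&ia[i], app, (size_t)napp, sizeof(PetscInt), cmp_petscint) ? PETSC_FALSE : PETSC_TRUE;
    if (missing[i]) ia[i] = app[0];   /* any index known to the AO */
  }
  ierr = AOApplicationToPetsc(ao, n, ia);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    if (missing[i]) ia[i] = -1;       /* flag value for "not in the AO" */
  }
  ierr = PetscFree(missing);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}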

