Re: [petsc-users] Suspicious long call to VecAXPY

2017-01-06 Thread Barry Smith
The second one should absolutely be slower than the first (because it actually iterates through the indices you pass in with an indirection) and the first should not get slower the more you run it. Depending on your environment I recommend using a profiling tool on the code and
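As one concrete way to act on the profiling advice, PETSc's built-in logging summarizes the time spent in each operation, including VecAXPY; the executable name and process count below are placeholders:

    mpiexec -n 4 ./my_app -log_view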

Re: [petsc-users] Suspicious long call to VecAXPY

2017-01-06 Thread Dave May
On 6 January 2017 at 22:31, Łukasz Kasza wrote: > > > Dear PETSc Users, > > Please consider the following 2 snippets which do exactly the same thing > (calculate a sum of two vectors): > 1. > VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]); > >

Re: [petsc-users] Suspicious long call to VecAXPY

2017-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2017 at 4:31 PM, Łukasz Kasza wrote: > > > Dear PETSc Users, > > Please consider the following 2 snippets which do exactly the same thing > (calculate a sum of two vectors): > 1. > VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]); > >

[petsc-users] Suspicious long call to VecAXPY

2017-01-06 Thread Łukasz Kasza
Dear PETSc Users, Please consider the following 2 snippets which do exactly the same thing (calculate a sum of two vectors): 1. VecAXPY(amg_level_x[level],1.0,amg_level_residuals[level]); 2. VecGetArray(amg_level_residuals[level], );
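The archive has eaten the second argument of the VecGetArray call above (most likely an &-prefixed pointer stripped as HTML). A minimal self-contained sketch of the two equivalent approaches, with invented vector names in place of the poster's arrays:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec                x, y;
      PetscScalar       *ya;
      const PetscScalar *xa;
      PetscInt           i, n;

      PetscInitialize(&argc, &argv, NULL, NULL);
      VecCreateSeq(PETSC_COMM_SELF, 100, &x);
      VecDuplicate(x, &y);
      VecSet(x, 1.0);
      VecSet(y, 2.0);

      /* 1. Let PETSc do the sum: y <- y + 1.0*x */
      VecAXPY(y, 1.0, x);

      /* 2. The same sum through the raw arrays */
      VecGetArrayRead(x, &xa);
      VecGetArray(y, &ya);
      VecGetLocalSize(y, &n);
      for (i = 0; i < n; i++) ya[i] += xa[i];
      VecRestoreArrayRead(x, &xa);
      VecRestoreArray(y, &ya);

      VecDestroy(&x);
      VecDestroy(&y);
      PetscFinalize();
      return 0;
    }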

Re: [petsc-users] Best way to scatter a Seq vector ?

2017-01-06 Thread Manuel Valera
Awesome, that did it, thanks once again. On Fri, Jan 6, 2017 at 1:53 PM, Barry Smith wrote: > > Take the scatter out of the if () since every process must do it, and get rid of > the VecView(). > > Does this work? If not, where is it hanging? > > > > On Jan 6, 2017, at 3:29 PM,

Re: [petsc-users] Best way to scatter a Seq vector ?

2017-01-06 Thread Barry Smith
Take the scatter out of the if () since every process must do it, and get rid of the VecView(). Does this work? If not, where is it hanging? > On Jan 6, 2017, at 3:29 PM, Manuel Valera wrote: > > Thanks Dave, > > I think it is interesting that it never gave an error on this,
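A minimal C sketch of the pattern Barry describes (the thread's actual code is Fortran; all names here are invented): VecSetValues may be called by one rank only, but assembly and scatter calls are collective and must be made by every rank:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec         src, dst;
      VecScatter  ctx;
      PetscMPIInt rank;
      PetscInt    ix[2] = {0, 1};
      PetscScalar v[2]  = {3.0, 4.0};

      PetscInitialize(&argc, &argv, NULL, NULL);
      MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
      VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 8, &src);
      /* only rank 0 inserts values ... */
      if (rank == 0) VecSetValues(src, 2, ix, v, INSERT_VALUES);
      /* ... but assembly and scatter are collective: every rank calls them */
      VecAssemblyBegin(src);
      VecAssemblyEnd(src);
      VecScatterCreateToZero(src, &ctx, &dst);
      VecScatterBegin(ctx, src, dst, INSERT_VALUES, SCATTER_FORWARD);
      VecScatterEnd(ctx, src, dst, INSERT_VALUES, SCATTER_FORWARD);
      VecScatterDestroy(&ctx);
      VecDestroy(&dst);
      VecDestroy(&src);
      PetscFinalize();
      return 0;
    }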

Re: [petsc-users] Best way to scatter a Seq vector ?

2017-01-06 Thread Manuel Valera
Thanks Dave, I think it is interesting that it never gave an error on this. After adding the VecAssembly calls it still shows the same behavior, without complaining. I did: if(rankl==0)then call VecSetValues(bp0,nbdp,ind,Rhs,INSERT_VALUES,ierr) call VecAssemblyBegin(bp0,ierr) ; call

Re: [petsc-users] Best way to scatter a Seq vector ?

2017-01-06 Thread Dave May
On 6 January 2017 at 20:24, Manuel Valera wrote: > Great help Barry, I totally had overlooked that option (it is explicit in > the VecScatterBegin call help page but not in VecScatterCreateToZero, as I > read later) > > So I used that and it works partially: it scatters the

Re: [petsc-users] Best way to scatter a Seq vector ?

2017-01-06 Thread Manuel Valera
Great help Barry, I totally had overlooked that option (it is explicit in the VecScatterBegin call help page but not in VecScatterCreateToZero, as I read later). So I used that and it works partially: it scatters the values assigned in root but not the rest. If I call VecScatterBegin from outside
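Reading past the truncation, the overlooked detail appears to be the scatter direction documented on the VecScatterBegin page; a hedged sketch (names invented) of distributing rank-0 values with a reverse scatter:

    #include <petscvec.h>

    int main(int argc, char **argv)
    {
      Vec        par, seq;
      VecScatter ctx;

      PetscInitialize(&argc, &argv, NULL, NULL);
      VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 8, &par);
      /* seq has full length on rank 0 and length zero on all other ranks */
      VecScatterCreateToZero(par, &ctx, &seq);
      VecSet(seq, 5.0);  /* fills the rank-0 copy; a no-op on the empty copies */
      /* SCATTER_REVERSE pushes the rank-0 values out to the distributed vector */
      VecScatterBegin(ctx, seq, par, INSERT_VALUES, SCATTER_REVERSE);
      VecScatterEnd(ctx, seq, par, INSERT_VALUES, SCATTER_REVERSE);
      VecScatterDestroy(&ctx);
      VecDestroy(&seq);
      VecDestroy(&par);
      PetscFinalize();
      return 0;
    }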

Re: [petsc-users] a question on DMPlexSetAnchors

2017-01-06 Thread Rochan Upadhyay
Yes, the option MatSetOption(M, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE) seems to be the path of least resistance, especially as it is something I am doing out of my own curiosity and not part of anything larger. I might have to bug you again very soon on how to optimize or move forward based on
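For context, a short self-contained sketch of where that option goes; the matrix M and the out-of-pattern insertion are illustrative, not the poster's DMPlex setup:

    #include <petscmat.h>

    int main(int argc, char **argv)
    {
      Mat M;

      PetscInitialize(&argc, &argv, NULL, NULL);
      MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 8, 8,
                   1, NULL, 1, NULL, &M);
      /* do not raise an error when an entry lands outside the existing
         nonzero pattern; the new location is accommodated instead */
      MatSetOption(M, MAT_NEW_NONZERO_LOCATION_ERR, PETSC_FALSE);
      MatSetValue(M, 0, 5, 1.0, INSERT_VALUES);
      MatAssemblyBegin(M, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(M, MAT_FINAL_ASSEMBLY);
      MatDestroy(&M);
      PetscFinalize();
      return 0;
    }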

Re: [petsc-users] Fieldsplit with sub pc MUMPS in parallel

2017-01-06 Thread Barry Smith
Great, you should now be able to remove the extra options I had you add. > -fieldsplit_0_ksp_type gmres -fieldsplit_0_ksp_pc_side right > -fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_pc_side right > On Jan 6, 2017, at 5:17 AM, Karin wrote: > > Barry, > > you
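For reference, the options being removed sit on a run line roughly like the first one below; the executable name and the MUMPS block solves are assumptions inferred from the thread title:

    # with the extra inner-solver options Barry had added:
    ./app -pc_type fieldsplit \
          -fieldsplit_0_ksp_type gmres -fieldsplit_0_ksp_pc_side right \
          -fieldsplit_1_ksp_type gmres -fieldsplit_1_ksp_pc_side right

    # simplified, with MUMPS factoring each split directly:
    ./app -pc_type fieldsplit \
          -fieldsplit_0_pc_type lu -fieldsplit_0_pc_factor_mat_solver_package mumps \
          -fieldsplit_1_pc_type lu -fieldsplit_1_pc_factor_mat_solver_package mumps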

Re: [petsc-users] [SOLVED] make test freeze

2017-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2017 at 10:08 AM, Patrick Begou < patrick.be...@legi.grenoble-inp.fr> wrote: > Hi Matthew, > > Using the debugger I finally found the problem. It is related to MPI. In > src/sys/objects/pinit.c line 779, PETSc tests the availability of > PETSC_HAVE_MPI_INIT_THREAD and this is set to

Re: [petsc-users] a question on DMPlexSetAnchors

2017-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2017 at 8:52 AM, Rochan Upadhyay wrote: > Constraints come from so-called cohomology conditions. In practical > applications, > they arise when you couple field models (e.g. Maxwell's equations) with > lumped > models (e.g. circuit equations). They are

Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-06 Thread Satish Balay
On Fri, 6 Jan 2017, Klaij, Christiaan wrote: > Satish, > > Our sysadmin is not keen on downgrading glibc. Sure. > I'll stick with "--with-shared-libraries=0" for now. That's fine. > and wait for SL7.3 with intel 17. Well, they are not related, so if you can you should upgrade to intel-17

Re: [petsc-users] [SOLVED] make test freeze

2017-01-06 Thread Patrick Begou
Hi Matthew, Using the debugger I finally found the problem. It is related to MPI. In src/sys/objects/pinit.c line 779, PETSc tests the availability of PETSC_HAVE_MPI_INIT_THREAD and this is set to True because my OpenMPI version is compiled with --enable-mpi-thread-multiple. However the call
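Outside of PETSc, the MPI call in question behaves as follows; a self-contained sketch (the requested level here is illustrative, not necessarily what pinit.c requests):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
      int provided;

      /* request a thread support level; the library reports what it grants */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      if (provided < MPI_THREAD_MULTIPLE)
        printf("MPI granted thread level %d, less than requested\n", provided);
      MPI_Finalize();
      return 0;
    }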

Re: [petsc-users] TSPseudo overriding SNES iterations

2017-01-06 Thread Jed Brown
"Mark W. Lohry" writes: > I have an unsteady problem I'm trying to solve for steady state. The regular > time-accurate stepping works fine (uses around 5 Newton iterations with 100 > krylov iterations each per time step) with beuler stepping. > > > But when changing only

[petsc-users] malconfigured gamg

2017-01-06 Thread Arne Morten Kvarving
Hi, first, this was a user error and I totally acknowledge this, but I wonder if this might be an oversight in your error checking: if you configure GAMG with ILU/ASM smoothing, and are stupid enough to have set the number of smoother cycles to 0, your program churns along and apparently
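As a hedged guess at the exact options involved, the misconfiguration would look something like this; with zero KSP iterations per level the smoother silently does nothing:

    ./app -pc_type gamg -mg_levels_pc_type asm -mg_levels_ksp_max_it 0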

Re: [petsc-users] a question on DMPlexSetAnchors

2017-01-06 Thread Rochan Upadhyay
Constraints come from so-called cohomology conditions. In practical applications, they arise when you couple field models (e.g. Maxwell's equations) with lumped models (e.g. circuit equations). They are described in this paper : http://gmsh.info/doc/preprints/gmsh_homology_preprint.pdf In their

Re: [petsc-users] make test freeze

2017-01-06 Thread Patrick Begou
This is not the first time I have had this problem, and my aim now is to solve it instead of ignoring the tests. The environment seems consistent (see below). I'll try to run in debug mode to investigate where the code hangs. Patrick [begou@kareline tutorials]$ make ex19 mpicc -o ex19.o -c -Wall
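For the debug run mentioned, one can either use PETSc's built-in debugger launch or attach to the hung process; both lines are sketches (process count and PID are placeholders):

    mpiexec -n 2 ./ex19 -start_in_debugger noxterm
    # or, after launching normally and observing the hang:
    gdb -p <pid-of-hung-ex19>   # then 'bt' for a backtrace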

Re: [petsc-users] make test freeze

2017-01-06 Thread Matthew Knepley
On Fri, Jan 6, 2017 at 2:39 AM, Patrick Begou < patrick.be...@legi.grenoble-inp.fr> wrote: > Hi Matthew, > > Launching ex19 manually shows only one process consuming CPU time; after > 952 min I killed the job this morning. > > [begou@kareline tutorials]$ make ex19 > mpicc -o ex19.o -c -Wall

Re: [petsc-users] make test freeze

2017-01-06 Thread Patrick Begou
Hi Matthew, Launching ex19 manually shows only one process consuming CPU time; after 952 min I killed the job this morning. [begou@kareline tutorials]$ make ex19 mpicc -o ex19.o -c -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fvisibility=hidden -g3

Re: [petsc-users] problems after glibc upgrade to 2.17-157

2017-01-06 Thread Klaij, Christiaan
Satish, Our sysadmin is not keen on downgrading glibc. I'll stick with "--with-shared-libraries=0" for now and wait for SL7.3 with intel 17. Thanks for filing the bug report at RHEL; very curious to see their response. Chris dr. ir. Christiaan Klaij | CFD Researcher | Research & Development
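For anyone hitting the same glibc problem, the workaround amounts to a static-library build; a hedged sketch of the configure step (all other options as in the original build):

    ./configure --with-shared-libraries=0 <other configure options as before>
    make all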