Re: [petsc-users] Calling MATLAB code within a parallel C/C++ code

2020-06-11 Thread Amneet Bhalla
This is so cool! Thank you for providing a gateway to MATLAB via PETSc. On Thu, Jun 11, 2020 at 6:21 PM Barry Smith wrote: > > > On Jun 11, 2020, at 2:47 PM, Amneet Bhalla wrote: > > Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm > understanding correctly, when

Re: [petsc-users] matcreate and assembly issue

2020-06-11 Thread Karl Lin
After recompiling with the 64-bit indices option, the program ran successfully. Thank you very much for the insight. On Thu, Jun 11, 2020 at 12:00 PM Satish Balay wrote: > On Thu, 11 Jun 2020, Karl Lin wrote: > > > Hi, Matthew > > > > Thanks for the suggestion, just did another run and here are some >
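For reference, the fix described here is rebuilding PETSc with 64-bit indices so that PetscInt (and hence global row/column indices and nonzero counts) can exceed 2^31-1. A minimal configure sketch; the remaining options must match the original build and are not shown in the thread:

  ./configure --with-64-bit-indices [other options as in the original build]
  make all

Application code that declares its indices as PetscInt rather than int picks up the wider integer type automatically.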

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Jed Brown
Danyang Su writes: > Hi Barry, > > The HDF5 calls fail. I reconfigure PETSc with HDF 1.10.5 version and it works > fine on different platforms. So, it is more likely there is a bug in the > latest HDF version. I would double-check that you have not subtly violated a collective requirement in

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi Barry, The HDF5 calls fail. I reconfigured PETSc with HDF5 1.10.5 and it works fine on different platforms, so it is more likely there is a bug in the latest HDF5 version. Thanks. All the best, Danyang On June 11, 2020 5:58:28 a.m. PDT, Barry Smith wrote: > >Are you making HDF5
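A hedged sketch of how one might pin the older HDF5 while the 1.12.0 behavior is sorted out; the tarball path below is illustrative, not taken from the thread:

  ./configure --download-hdf5=/path/to/hdf5-1.10.5.tar.gz [other options as before]

Pointing --with-hdf5-dir at an existing HDF5 1.10.5 installation is an equivalent alternative.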

Re: [petsc-users] Calling MATLAB code within a parallel C/C++ code

2020-06-11 Thread Barry Smith
> On Jun 11, 2020, at 2:47 PM, Amneet Bhalla wrote: > > Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm > understanding correctly, when PetscMatlabEngineCreate() is called, a MATLAB > workspace is created, that persists until PetscMatlabEngineDestroy() is > called.

Re: [petsc-users] TAO STCG initial perturbation

2020-06-11 Thread Jed Brown
"Dener, Alp via petsc-users" writes: > About Levenberg-Marquardt: a user started the branch to eventually contribute > an LM solver, but I have not heard any updates on it since end of April. For > least-squares type problems, you can try using the regularized Gauss-Newton > solver (-tao_type

Re: [petsc-users] L infinity norm convergence tests

2020-06-11 Thread Jed Brown
Matthew Knepley writes: > On Wed, Jun 10, 2020 at 9:02 AM Mark Lohry wrote: > >> Hi all, is there a built-in way to use L-infinity norms for the SNES and >> KSP convergence tests, or do I need to write a manual KSPSetConvergenceTest >> function? >> > > You need to write a custom test. Note
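Since the message is cut off before any code, here is a minimal sketch of the kind of custom test being described: an L-infinity check installed with KSPSetConvergenceTest(). The absolute-tolerance handling and the context layout are illustrative only; an analogous SNESSetConvergenceTest() exists for the nonlinear solve.

  #include <petscksp.h>

  /* Illustrative convergence test: declare convergence when the true
     residual's infinity norm drops below a user-chosen absolute tolerance
     passed through ctx.  A production test would also handle relative
     tolerances, divergence, and maximum iteration counts. */
  static PetscErrorCode LInfConverged(KSP ksp, PetscInt it, PetscReal rnorm2,
                                      KSPConvergedReason *reason, void *ctx)
  {
    PetscReal      atol = *(PetscReal *)ctx, rinf;
    Vec            r;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr); /* true residual b - Ax */
    ierr = VecNorm(r, NORM_INFINITY, &rinf);CHKERRQ(ierr);
    ierr = VecDestroy(&r);CHKERRQ(ierr);
    *reason = (rinf < atol) ? KSP_CONVERGED_ATOL : KSP_CONVERGED_ITERATING;
    PetscFunctionReturn(0);
  }

  /* Installed after KSPCreate()/KSPSetFromOptions(), e.g.:
       static PetscReal atol = 1.0e-8;
       KSPSetConvergenceTest(ksp, LInfConverged, &atol, NULL);  */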

Re: [petsc-users] Calling MATLAB code within a parallel C/C++ code

2020-06-11 Thread Amneet Bhalla
Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm understanding correctly, when PetscMatlabEngineCreate() is called, a MATLAB workspace is created that persists until PetscMatlabEngineDestroy() is called. PETSc can access/put/manipulate variables in this workspace, and can also
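For anyone finding this thread later, a minimal sketch of the lifecycle being described; the script name "myscript" is made up for illustration and error handling is abbreviated:

  #include <petscvec.h>
  #include <petscmatlab.h>

  /* Push a PETSc Vec into a per-process MATLAB workspace, evaluate a
     (hypothetical) script on it, and pull the result back.  The workspace
     persists across Put/Evaluate/Get calls until the engine is destroyed. */
  static PetscErrorCode ApplyMatlabScript(Vec x)
  {
    PetscMatlabEngine mengine;
    PetscErrorCode    ierr;

    PetscFunctionBeginUser;
    ierr = PetscMatlabEngineCreate(PETSC_COMM_SELF, NULL, &mengine);CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject)x, "x");CHKERRQ(ierr);
    ierr = PetscMatlabEnginePut(mengine, (PetscObject)x);CHKERRQ(ierr);   /* "x" now in the workspace */
    ierr = PetscMatlabEngineEvaluate(mengine, "y = myscript(x);");CHKERRQ(ierr);
    ierr = PetscObjectSetName((PetscObject)x, "y");CHKERRQ(ierr);         /* fetch "y" back into the Vec */
    ierr = PetscMatlabEngineGet(mengine, (PetscObject)x);CHKERRQ(ierr);
    ierr = PetscMatlabEngineDestroy(&mengine);CHKERRQ(ierr);              /* workspace goes away here */
    PetscFunctionReturn(0);
  }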

Re: [petsc-users] TAO STCG initial perturbation

2020-06-11 Thread Dener, Alp via petsc-users
Hi Zak, Gauss-Newton finds the least-squares solution of overdetermined systems, e.g. nonlinear regression. It minimizes the squared L2-norm of a nonlinear residual ||r(x)||_2^2 where the Jacobian J = dr/dx is rectangular with full column rank. Since this J is not invertible, Gauss-Newton uses
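To spell out the step being described (standard Gauss-Newton, not anything specific to the TAO implementation): at iterate x_k the linearized subproblem min_p ||r(x_k) + J(x_k) p||_2^2 is solved via the normal equations

  (J(x_k)^T J(x_k)) p_k = -J(x_k)^T r(x_k),    x_{k+1} = x_k + p_k,

which only requires J^T J to be nonsingular (i.e. J of full column rank), not J itself to be square or invertible.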

Re: [petsc-users] TAO STCG initial perturbation

2020-06-11 Thread zakaryah .
Hi Alp, Thanks for the help. Quasi-Newton seems promising - the Tao solver eventually converges, sometimes after hundreds or even thousands of iterations, with each iteration proceeding very quickly since the Hessian is not evaluated. I have only tried this with the problem set up as a general

Re: [petsc-users] matcreate and assembly issue

2020-06-11 Thread Matthew Knepley
On Thu, Jun 11, 2020 at 12:52 PM Karl Lin wrote: > Hi, Matthew > > Thanks for the suggestion, just did another run and here are some detailed > stack traces, maybe will provide some more insight: > *** Process received signal *** > Signal: Aborted (6) > Signal code: (-6) >

Re: [petsc-users] matcreate and assembly issue

2020-06-11 Thread Satish Balay via petsc-users
On Thu, 11 Jun 2020, Karl Lin wrote: > Hi, Matthew > > Thanks for the suggestion, just did another run and here are some detailed > stack traces, maybe will provide some more insight: > *** Process received signal *** > Signal: Aborted (6) > Signal code: (-6) >

Re: [petsc-users] matcreate and assembly issue

2020-06-11 Thread Karl Lin
Hi, Matthew Thanks for the suggestion, just did another run and here are some detailed stack traces, which may provide some more insight: *** Process received signal *** Signal: Aborted (6) Signal code: (-6) /lib64/libpthread.so.0(+0xf5f0)[0x2b56c46dc5f0] [ 1]

Re: [petsc-users] matcreate and assembly issue

2020-06-11 Thread Matthew Knepley
On Thu, Jun 11, 2020 at 11:51 AM Karl Lin wrote: > Hi, there > > We have written a program using Petsc to solve large sparse matrix system. > It has been working fine for a while. Recently we encountered a problem > when the size of the sparse matrix is larger than 10TB. We used several >

[petsc-users] matcreate and assembly issue

2020-06-11 Thread Karl Lin
Hi there, We have written a program using PETSc to solve large sparse matrix systems. It has been working fine for a while. Recently we encountered a problem when the size of the sparse matrix is larger than 10 TB. We used several hundred nodes and 2200 processes. The program always crashes during
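A back-of-the-envelope check that is relevant here (assuming double-precision AIJ storage at roughly 8 bytes per value plus 4 bytes per column index, about 12 bytes per nonzero): a 10 TB matrix holds on the order of 10^13 / 12 ≈ 8x10^11 nonzeros, far beyond the 2^31 - 1 ≈ 2.1x10^9 range of a 32-bit PetscInt, so index and nonzero counts overflow unless PETSc is built with 64-bit indices, which matches the resolution reported elsewhere in this thread.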

Re: [petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Barry Smith
Are you making HDF5 calls that fail or is it PETSc routines calling HDF5 that fail? Regardless it sounds like the easiest fix is to switch back to the previous HDF5 and wait for HDF5 to fix what sounds to be a bug. Barry > On Jun 11, 2020, at 1:05 AM, Danyang Su wrote: > > Hi All,

Re: [petsc-users] A series of GPU questions

2020-06-11 Thread Mark Adams
> > >> >> Would we instead just have 40 (or perhaps slightly fewer) MPI processes >> all sharing the GPUs? Surely this would be inefficient, and would PETSc >> distribute the work across all 4 GPUs, or would every process end out using >> a single GPU? >> > See >

[petsc-users] FW: Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi All, Sorry for accidentally sending the previous incomplete email. After updating to HDF5-1.12.0, I run into a problem when some processors have no data to write or do not need to write. Since parallel writing is collective, I cannot exclude those processors from writing. For the old
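A minimal sketch of the usual way to satisfy HDF5's collective requirement when some ranks contribute nothing: every rank still calls H5Dwrite, but ranks with no data select nothing in both the memory and file dataspaces. This is generic parallel-HDF5 usage, not code from the thread; "dset", "offset", "count", and "buf" are assumed to be set up by the application.

  #include <hdf5.h>

  /* Collective 1-D write where some ranks may own zero elements (count == 0).
     "buf" should remain a valid pointer even when nothing is written.         */
  static herr_t WriteMySlab(hid_t dset, hsize_t offset, hsize_t count, const double *buf)
  {
    /* Zero-sized extents are legal in recent HDF5; H5Screate(H5S_NULL) is an alternative. */
    hid_t  memspace  = H5Screate_simple(1, &count, NULL);
    hid_t  filespace = H5Dget_space(dset);
    hid_t  dxpl      = H5Pcreate(H5P_DATASET_XFER);
    herr_t status;

    H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
    if (count > 0) {
      H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &offset, NULL, &count, NULL);
    } else {
      H5Sselect_none(filespace);   /* participate in the collective call */
      H5Sselect_none(memspace);    /* with an empty selection            */
    }
    status = H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

    H5Pclose(dxpl); H5Sclose(filespace); H5Sclose(memspace);
    return status;
  }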

[petsc-users] Parallel writing in HDF5-1.12.0 when some processors have no data to write

2020-06-11 Thread Danyang Su
Hi All, After updating to HDF5-1.12.0, I run into a problem when some processors have no data to write or do not need to write. Since parallel writing is collective, I cannot exclude those processors from writing. With the old version, there seems to be no such problem. So far, the problem only