This is so cool! Thank you for providing a gateway to MATLAB via PETSc.
On Thu, Jun 11, 2020 at 6:21 PM Barry Smith wrote:
>
>
> On Jun 11, 2020, at 2:47 PM, Amneet Bhalla wrote:
>
> Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm
> understanding correctly, when
After recompiling with the 64-bit option, the program ran successfully. Thank
you very much for the insight.
On Thu, Jun 11, 2020 at 12:00 PM Satish Balay wrote:
> On Thu, 11 Jun 2020, Karl Lin wrote:
>
> > Hi, Matthew
> >
> > Thanks for the suggestion. I just did another run, and here are some
>
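For anyone hitting the same limit: a rough sketch, my own illustration rather
than anything stated in this thread, of why the default 32-bit PetscInt
overflows at this scale. An AIJ matrix of roughly 10 TB stores about 12 bytes
per nonzero (an 8-byte value plus a 4-byte column index), i.e. on the order of
8e11 nonzeros, far beyond the 32-bit limit of 2^31 - 1, about 2.1e9. A quick
way to check how a given PETSc build was configured:

#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscErrorCode ierr = PetscInitialize(&argc, &argv, NULL, NULL);
  if (ierr) return ierr;
  /* Prints 4 for a default build, 8 for one configured with 64-bit indices */
  ierr = PetscPrintf(PETSC_COMM_WORLD, "sizeof(PetscInt) = %d bytes\n",
                     (int)sizeof(PetscInt));CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}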
Danyang Su writes:
> Hi Barry,
>
> The HDF5 calls fail. I reconfigured PETSc with HDF5 version 1.10.5 and it works
> fine on different platforms. So it is more likely that there is a bug in the
> latest HDF5 version.
I would double-check that you have not subtly violated a collective requirement
in
Hi Barry,
The HDF5 calls fail. I reconfigured PETSc with HDF5 version 1.10.5 and it works
fine on different platforms. So it is more likely that there is a bug in the
latest HDF5 version.
Thanks.
All the best,
Danyang
On June 11, 2020 5:58:28 a.m. PDT, Barry Smith wrote:
>
>Are you making HDF5
> On Jun 11, 2020, at 2:47 PM, Amneet Bhalla wrote:
>
> Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm
> understanding correctly, when PetscMatlabEngineCreate() is called, a MATLAB
> workspace is created that persists until PetscMatlabEngineDestroy() is
> called.
"Dener, Alp via petsc-users" writes:
> About Levenberg-Marquardt: a user started the branch to eventually contribute
> an LM solver, but I have not heard any updates on it since the end of April. For
> least-squares type problems, you can try using the regularized Gauss-Newton
> solver (-tao_type
Matthew Knepley writes:
> On Wed, Jun 10, 2020 at 9:02 AM Mark Lohry wrote:
>
>> Hi all, is there a built-in way to use L-infinity norms for the SNES and
>> KSP convergence tests, or do I need to write a manual KSPSetConvergenceTest
>> function?
>>
>
> You need to write a custom test.
Note
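A sketch of such a custom test (illustrative names, not taken from the thread):
rebuild the true residual, take its infinity norm, and compare against an
absolute tolerance passed in through the context pointer. The rnorm that KSP
hands the test is the norm from the iteration itself, typically a
(preconditioned) 2-norm, so it is ignored here.

#include <petscksp.h>

static PetscErrorCode ConvergedInfNorm(KSP ksp, PetscInt it, PetscReal rnorm,
                                       KSPConvergedReason *reason, void *ctx)
{
  PetscReal      tol = *(PetscReal *)ctx; /* user-supplied absolute tolerance */
  Vec            r;
  PetscReal      inorm;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPBuildResidual(ksp, NULL, NULL, &r);CHKERRQ(ierr); /* true residual b - Ax */
  ierr = VecNorm(r, NORM_INFINITY, &inorm);CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr); /* created for us by KSPBuildResidual */
  *reason = (inorm <= tol) ? KSP_CONVERGED_ATOL : KSP_CONVERGED_ITERATING;
  PetscFunctionReturn(0);
}

Register it after the KSP is set up, with a context that outlives the solve:

static PetscReal tol = 1.0e-8; /* hypothetical tolerance */
KSPSetConvergenceTest(ksp, ConvergedInfNorm, &tol, NULL);

A real test would also cap the iteration count and flag divergence; this only
shows the mechanics.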
Ok, I'm able to call and use MATLAB functionality from PETSc. If I'm
understanding correctly, when PetscMatlabEngineCreate() is called, a
MATLAB workspace is created that persists until PetscMatlabEngineDestroy()
is called. PETSc can access/put/manipulate variables in this workspace, and
can also
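A minimal sketch of that lifecycle (assumes a PETSc build configured with
MATLAB engine support, e.g. --with-matlab; the vector and the MATLAB variable
name are illustrative). The workspace lives from PetscMatlabEngineCreate() to
PetscMatlabEngineDestroy(), and named PETSc objects can be pushed into it,
manipulated by MATLAB code, and pulled back:

#include <petscvec.h>
#include <petscmatlab.h>

int main(int argc, char **argv)
{
  Vec               x;
  PetscMatlabEngine mengine;
  PetscErrorCode    ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateSeq(PETSC_COMM_SELF, 4, &x);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)x, "x");CHKERRQ(ierr); /* MATLAB name */

  ierr = PetscMatlabEngineCreate(PETSC_COMM_SELF, NULL, &mengine);CHKERRQ(ierr);
  ierr = PetscMatlabEnginePut(mengine, (PetscObject)x);CHKERRQ(ierr);  /* x -> workspace */
  ierr = PetscMatlabEngineEvaluate(mengine, "x = 2*x;");CHKERRQ(ierr); /* run MATLAB code */
  ierr = PetscMatlabEngineGet(mengine, (PetscObject)x);CHKERRQ(ierr);  /* workspace -> x */
  ierr = PetscMatlabEngineDestroy(&mengine);CHKERRQ(ierr); /* workspace goes away here */

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}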
Hi Zak,
Gauss-Newton finds the least-squares solution of overdetermined systems, e.g.
nonlinear regression. It minimizes the squared L2-norm of a nonlinear residual
||r(x)||_2^2 where the Jacobian J = dr/dx is rectangular with full column rank.
Since this J is not invertible, Gauss-Newton uses the normal equations
(J^T J) dx = -J^T r to define each step.
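For reference, the standard derivation behind that statement (textbook
Gauss-Newton, not quoted from the message), written out in LaTeX:

% Linearize the residual about the current iterate x_k and minimize the
% squared 2-norm of the linearization over the step \delta x:
\[
\min_{\delta x} \ \lVert r(x_k) + J\,\delta x \rVert_2^2
\quad\Longrightarrow\quad
(J^\top J)\,\delta x = -J^\top r(x_k),
\qquad
x_{k+1} = x_k + \delta x .
\]

J^T J is square and invertible when J has full column rank, so the step is
well defined even though J itself has no inverse.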
Hi Alp,
Thanks for the help. Quasi-Newton seems promising: the Tao solver
eventually converges, sometimes after hundreds or even thousands of
iterations, with each iteration proceeding very quickly since the
Hessian is never evaluated. I have only tried this with the problem set up as a
general
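A minimal sketch of that kind of setup (the toy objective is my own
illustration; API names as in the PETSc releases current at the time, e.g.
TaoSetInitialVector and TaoSetObjectiveAndGradientRoutine): only function
values and gradients are supplied, never a Hessian.

#include <petsctao.h>

/* f(x) = 1/2 ||x - 1||^2, grad f = x - 1 (illustrative objective) */
static PetscErrorCode FormFunctionGradient(Tao tao, Vec x, PetscReal *f, Vec g, void *ctx)
{
  PetscScalar    dot;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecCopy(x, g);CHKERRQ(ierr);
  ierr = VecShift(g, -1.0);CHKERRQ(ierr);  /* g = x - 1 */
  ierr = VecDot(g, g, &dot);CHKERRQ(ierr);
  *f   = 0.5 * PetscRealPart(dot);         /* f = 1/2 ||x - 1||^2 */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Tao            tao;
  Vec            x;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 10, &x);CHKERRQ(ierr);
  ierr = VecSet(x, 0.0);CHKERRQ(ierr);

  ierr = TaoCreate(PETSC_COMM_WORLD, &tao);CHKERRQ(ierr);
  ierr = TaoSetType(tao, TAOLMVM);CHKERRQ(ierr); /* limited-memory quasi-Newton */
  ierr = TaoSetInitialVector(tao, x);CHKERRQ(ierr);
  ierr = TaoSetObjectiveAndGradientRoutine(tao, FormFunctionGradient, NULL);CHKERRQ(ierr);
  ierr = TaoSetFromOptions(tao);CHKERRQ(ierr);   /* e.g. -tao_monitor, -tao_max_it */
  ierr = TaoSolve(tao);CHKERRQ(ierr);

  ierr = TaoDestroy(&tao);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}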
On Thu, Jun 11, 2020 at 12:52 PM Karl Lin wrote:
> Hi, Matthew
>
> Thanks for the suggestion. I just did another run, and here are some detailed
> stack traces that may provide more insight:
> *** Process received signal ***
> Signal: Aborted (6)
> Signal code: (-6)
>
On Thu, 11 Jun 2020, Karl Lin wrote:
> Hi, Matthew
>
> Thanks for the suggestion. I just did another run, and here are some detailed
> stack traces that may provide more insight:
> *** Process received signal ***
> Signal: Aborted (6)
> Signal code: (-6)
>
Hi, Matthew
Thanks for the suggestion. I just did another run, and here are some detailed
stack traces that may provide more insight:
*** Process received signal ***
Signal: Aborted (6)
Signal code: (-6)
/lib64/libpthread.so.0(+0xf5f0)[0x2b56c46dc5f0]
[ 1]
On Thu, Jun 11, 2020 at 11:51 AM Karl Lin wrote:
> Hi, there
>
> We have written a program using PETSc to solve a large sparse matrix system.
> It has been working fine for a while. Recently we encountered a problem
> when the size of the sparse matrix is larger than 10 TB. We used several
>
Hi, there
We have written a program using PETSc to solve a large sparse matrix system.
It has been working fine for a while. Recently we encountered a problem
when the size of the sparse matrix is larger than 10 TB. We used several
hundred nodes and 2200 processes. The program always crashes during
Are you making HDF5 calls that fail, or is it PETSc routines calling HDF5 that
fail?
Regardless, it sounds like the easiest fix is to switch back to the previous
HDF5 version and wait for HDF5 to fix what sounds like a bug.
Barry
> On Jun 11, 2020, at 1:05 AM, Danyang Su wrote:
>
> Hi All,
>
>
>>
>> Would we instead just have 40 (or perhaps slightly fewer) MPI processes
>> all sharing the GPUs? Surely this would be inefficient. Would PETSc
>> distribute the work across all 4 GPUs, or would every process end up using
>> a single GPU?
>>
> See
>
Hi All,
Sorry for accidentally sending the previous incomplete email.
After updating to HDF5-1.12.0, I ran into a problem when some processes have no
data to write or do not need to write. Since parallel writing is collective,
I cannot exclude those processes from writing. For the old
Hi All,
After updating to HDF5-1.12.0, I ran into a problem when some processes have no
data to write or do not need to write. Since parallel writing is collective,
I cannot exclude those processes from writing. For the old version, there
seems to be no such problem. So far, the problem only
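The usual workaround is for every rank to make the collective H5Dwrite call,
with ranks that own no data selecting nothing. A minimal sketch (plain HDF5
with MPI-IO; the function and variable names are illustrative, and whether a
NULL buffer is accepted for an empty selection can vary by HDF5 version, so a
dummy buffer is safest):

#include <hdf5.h>

/* Collective 1-D write where some ranks may contribute zero elements */
static void write_collective(hid_t dset, hsize_t my_count, hsize_t my_offset,
                             const double *buf)
{
  hsize_t mem_dim   = my_count > 0 ? my_count : 1;  /* dummy extent when empty */
  hid_t   filespace = H5Dget_space(dset);
  hid_t   memspace  = H5Screate_simple(1, &mem_dim, NULL);
  hid_t   dxpl      = H5Pcreate(H5P_DATASET_XFER);

  H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);
  if (my_count > 0) {
    H5Sselect_hyperslab(filespace, H5S_SELECT_SET, &my_offset, NULL, &my_count, NULL);
  } else {
    H5Sselect_none(filespace);  /* still participate in the collective call ... */
    H5Sselect_none(memspace);   /* ... but transfer zero elements */
  }
  H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

  H5Pclose(dxpl);
  H5Sclose(memspace);
  H5Sclose(filespace);
}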