Thanks for the quick response,

So it seems there is currently no straightforward way to add Fortran solvers to PETSc.


I will also have a look at how to add a new solver in the PETSc source code.

I wanted to test some ideas about a modified approach to Gram-Schmidt orthogonalisation,
called “windowed” orthogonalisation. The basic idea is as follows:
one starts Gram-Schmidt as usual, until we have e.g. 20 Krylov vectors.

For the following iterations, where the orthogonalization gets more expensive, one does not
orthogonalize the new vector against all previous vectors but only against the last 20.
Since we start with the first 20 as usual, one has a good basis that mainly covers the
lower frequencies.
(Is it also your experience that the first Krylov vectors lie more in the low-frequency
range? I still have to check this myself.)

Since the later iterations only consider the last 20 vectors for Gram-Schmidt, it becomes
cheaper (that is the “window”).

As we do not want the orthogonalization errors to become too large, after another 20
iterations one makes a restart using deflation, where we extract the main eigenvectors
from the solution obtained so far, so that we have a new, fully orthogonal basis after,
let's say, 40 iterations.

For that one can probably use the approach from DGMRES. So one could simply modify DGMRES,
respectively its Gram-Schmidt algorithm, to allow a more flexible choice of how many
vectors to consider. I think one basically just has to modify the start index in
Gram-Schmidt so that not all vectors are used, but only the last, let's say, 20 vectors
for orthogonalization.
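
To make this concrete, here is a minimal sketch in plain C of what I mean (this is not
PETSc code; the window size, the array layout, and the stand-in matrix-vector product
are just illustrative choices of mine):

#include <math.h>
#include <stdlib.h>

/* Orthogonalize v (length n) against the basis vectors V[lo..k-1] with
 * modified Gram-Schmidt, where lo = max(0, k - w) implements the "window":
 * only the last w basis vectors are used.  h receives the computed
 * projection coefficients (Hessenberg column); h[j] stays 0 for j < lo.  */
void windowed_mgs(double **V, int k, int n, int w, double *v, double *h)
{
  int lo = (k > w) ? k - w : 0;                        /* windowed start index */
  for (int j = 0; j < k; j++) h[j] = 0.0;
  for (int j = lo; j < k; j++) {
    double hij = 0.0;
    for (int i = 0; i < n; i++) hij += V[j][i] * v[i]; /* <V_j, v>   */
    for (int i = 0; i < n; i++) v[i] -= hij * V[j][i]; /* v -= h V_j */
    h[j] = hij;
  }
  double nrm = 0.0;                                    /* normalize new vector */
  for (int i = 0; i < n; i++) nrm += v[i] * v[i];
  nrm = sqrt(nrm);
  if (nrm > 0.0) for (int i = 0; i < n; i++) v[i] /= nrm;
  h[k] = nrm;                                          /* subdiagonal entry    */
}

int main(void)
{
  enum { n = 6, m = 4, w = 2 };          /* vector length, #steps, window size */
  double *V[m + 1], h[m + 1];
  for (int k = 0; k <= m; k++) V[k] = calloc(n, sizeof(double));
  V[0][0] = 1.0;                         /* first basis vector = e_1           */
  for (int k = 1; k <= m; k++) {
    /* stand-in for A*V[k-1], just to have something to orthogonalize */
    for (int i = 0; i < n; i++) V[k][i] = V[k-1][(i + 1) % n] + 0.1 * V[k-1][i];
    windowed_mgs(V, k, n, w, V[k], h);   /* only the last w vectors are used   */
  }
  for (int k = 0; k <= m; k++) free(V[k]);
  return 0;
}

The only real change compared to full Gram-Schmidt is the start index lo; the deflated
restart every, say, 2*w iterations would sit on top of this and I have left it out here.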

There is a paper and a MATLAB implementation where they discuss this approach (I have to
look up where I found it).

Does that approach sound to you like it has some potential to make Gram-Schmidt cheaper?


The other thing I wanted to check is the least-squares update via Givens rotations. It
seems that in the Ginkgo linear solver they reuse certain components within the Givens
rotations, which could potentially make it a bit faster. I have to look into the details
again.
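
For reference, the usual way GMRES handles the small least-squares problem is already
based on reuse: the cosines and sines of all previous Givens rotations are stored, and
each iteration only applies the old rotations to the one new Hessenberg column and then
computes a single new rotation, which also gives the residual norm for free. I do not
know yet what exactly Ginkgo reuses beyond that, so the following is only a sketch of
this textbook update (the names cs, sn, g are mine, not from any particular code):

#include <math.h>

/* One Givens-rotation step of the GMRES least-squares update.
 * h[0..k+1] : new (k-th) Hessenberg column, h[k+1] is the subdiagonal entry
 *             produced by the orthogonalization step
 * cs, sn    : stored cosines/sines of rotations 0..k-1; rotation k is appended
 * g         : right-hand side of the small least-squares problem;
 *             |g[k+1]| is the updated residual norm                          */
void gmres_givens_update(int k, double *h, double *cs, double *sn, double *g)
{
  /* (a) reuse the previously stored rotations on the new column only */
  for (int i = 0; i < k; i++) {
    double t =  cs[i] * h[i] + sn[i] * h[i + 1];
    h[i + 1] = -sn[i] * h[i] + cs[i] * h[i + 1];
    h[i]     = t;
  }
  /* (b) compute the new rotation that eliminates the subdiagonal entry */
  double r = hypot(h[k], h[k + 1]);
  if (r == 0.0) { cs[k] = 1.0; sn[k] = 0.0; }
  else          { cs[k] = h[k] / r; sn[k] = h[k + 1] / r; }
  h[k]     = r;
  h[k + 1] = 0.0;
  /* (c) apply it to the right-hand side; g[k+1] tracks the residual norm */
  g[k + 1] = -sn[k] * g[k];
  g[k]     =  cs[k] * g[k];
}

This would be called once per iteration right after the orthogonalization, so the
factorization of the Hessenberg matrix is never redone from scratch and the per-iteration
cost stays O(k).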

To figure out how they do it in Ginkgo, I let claude.ai crawl the PETSc code and the
Ginkgo code. The AI can then explain the differences to me and show me in detail where
the codes differ, so that I can take over the best parts from the different codes. It can
often explain quite well what the code does (at least that is the plan).


Greetings, Frank


> On 23 Jan 2025, at 04:58, Barry Smith <bsm...@petsc.dev> wrote:
> 
> 
>   I think it is best to code the modified GMRES method in C; likely, much of 
> the current GMRES code could be reused. We'd be happy to help incorporate it 
> into PETSc.
> 
>   Barry
> 
> 
>> On Jan 22, 2025, at 5:11 PM, Matthew Knepley <knep...@gmail.com> wrote:
>> 
>> On Wed, Jan 22, 2025 at 3:18 PM Frank Bramkamp <bramk...@nsc.liu.se 
>> <mailto:bramk...@nsc.liu.se>> wrote:
>>> Dear PETSc team,
>>> 
>>> I was planning to program a custom KSP method, some modified GMRES.
>>> We mainly use PETSc from Fortran. Therefore, I wonder whether it is possible
>>> to have an interface to a custom KSP solver that is written in fortran.
>>> 
>>> I thought of using KSPRegister to register my own routine, but that seems 
>>> only
>>> available in C.  Or is it possible to have a fortran/C wrapper to do that ?
>> 
>> We have wrappers for other functions that take callbacks, such as 
>> SNESSetFunction(). What
>> we need to do is have a list of Fortran function pointers for this method. 
>> Then, when you
>> register, we actually stick in a C wrapper that calls your Fortran function 
>> pointer that we have
>> stored in our list. It should be straightforward looking at the 
>> implementation for something like
>> SNESSetFunction(). We would help if you want to try :)
>> 
>> Barry, is this impacted by your binding rewrite?
>> 
>>   Thanks,
>> 
>>       Matt
>>  
>>> Thanks, Frank Bramkamp
>>> 
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> 
>> --
>> What most experimenters take for granted before they begin their experiments 
>> is infinitely more interesting than any results to which their experiments 
>> lead.
>> -- Norbert Wiener
>> 
>> https://www.cse.buffalo.edu/~knepley/
> 
