%. does not use SVD.  It does a QR decomposition followed by an inverse.

You can call LAPACK to perform the SVD, but you might need to delete columns with low singular values.

Which routine you need to use depends on details of what you are going to do.
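Henry's suggestion, SVD followed by discarding the directions with tiny singular values, can be sketched in NumPy on the thread's own 1000-by-21 design matrix. This is my illustration, not code from the thread, and the cutoff `tol` is an assumed choice (the usual machine-precision-based threshold):

```python
import numpy as np

# Build the thread's 1000x21 complex design matrix X and right-hand side b.
x = np.linspace(-0.5, 0.5, 1000).reshape(-1, 1)
j = np.arange(-10, 11).reshape(1, -1)
X = np.exp(1j * x @ j)
b = np.exp(x).ravel()

# SVD-based least squares: invert only the singular values above a cutoff.
U, s, Vh = np.linalg.svd(X, full_matrices=False)
tol = s[0] * max(X.shape) * np.finfo(float).eps   # assumed truncation threshold
keep = s > tol
c = Vh[keep].conj().T @ ((U[:, keep].conj().T @ b) / s[keep])

print(np.linalg.norm(X @ c - b, np.inf))          # small residual
```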

Henry Rich

On 4/29/2021 2:11 PM, Imre Patyi wrote:
Thank you for the response.

No, I am doing linear least squares.
The problem is to determine the complex coefficients c_j in the least
squares solution of

f(x_i) = \sum_{j=-10}^{10} \exp(\sqrt{-1} j x_i) c_j = \exp(x_i),  i=0,1,...,999.

The little programs that I included are correct, but J does not compute the
right numbers: its answer for  c  is several orders of magnitude too large.
The design matrix  X  is a 1000 by 21 complex matrix, and we need to form
the solution  c  of  Xc=b,  where the components of b are b_i=\exp(x_i).
In principle, c=(X'X)^{-1}X'b, where ()' is the Hermitian transpose.
This is the quantity that the J command  c=: b %. X
is supposed to compute (using a clever algorithm such as QR or SVD
decomposition with Householder reflections and/or Givens rotations;
at least that is how it works in Octave/Matlab).
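The formula c=(X'X)^{-1}X'b holds in principle but is exactly what a careful solver avoids computing: forming X'X squares the condition number, which for this matrix pushes it far beyond double precision. A small NumPy comparison (my addition, not from the thread) shows the difference:

```python
import numpy as np

# The thread's design matrix, condition number about 6.9e15.
x = np.linspace(-0.5, 0.5, 1000).reshape(-1, 1)
j = np.arange(-10, 11).reshape(1, -1)
X = np.exp(1j * x @ j)
b = np.exp(x).ravel()

# Normal equations: X'X has the condition number squared, so the solve
# produces a wildly inaccurate c (the same symptom the J run shows).
XH = X.conj().T
c_normal = np.linalg.solve(XH @ X, XH @ b)

# SVD-based least squares on X itself stays accurate.
c_svd = np.linalg.lstsq(X, b, rcond=None)[0]

print(np.linalg.norm(X @ c_normal - b, np.inf))  # much larger
print(np.linalg.norm(X @ c_svd - b, np.inf))     # small
```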

The backslash command of Octave solves a linear least squares problem and
so does the command np.linalg.lstsq() of numpy.

The above linear least squares problem (like most of the more useful ones)
is ill-conditioned; that is why the algorithm for %. must be very careful.
J also struggles with a smaller instance of the same problem, with the 10
replaced by 5 (i.e., 11 sinusoidal functions rather than 21).  It does OK
when I replace 10 by 3 (i.e., 7 sinusoidal functions).

A careful QR-solver in double precision should be able to solve an
ill-conditioned linear least squares problem (e.g., condition number about
1e18) in a few hundred unknowns.  The above matrix  X  with 21 columns has
condition number about 6.8562e+15, and accordingly, Octave and Python
compute an almost bit-perfect solution for the interpolating function.

Can anyone tell me which routine I should call in the J addon for LAPACK
in order to solve a linear least squares problem such as the one I
mentioned?  The command %. does not seem to do it.  (%. might have some
glitch with complex numbers; I could use it in similar-sized problems with
real numbers.  The doc page

file:///C:/program%20files/j64-805/addons/docs/help/dictionary/d131.htm

mentions that the Hermitian quadratic form +/d*+d is minimized, so it
should work for complex numbers.)
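For reference (my addition, not from the thread): LAPACK's complex least-squares drivers are zgels (plain QR), zgelsy (pivoted QR), and zgelss/zgelsd (SVD-based, rank-revealing); for a matrix this ill-conditioned a rank-revealing driver is the safe choice. The corresponding verb name in the J LAPACK addon may differ, but numpy.linalg.lstsq, which uses the gelsd driver, shows the expected behavior:

```python
import numpy as np

# The thread's problem solved with an SVD-based (gelsd-style) driver.
x = np.linspace(-0.5, 0.5, 1000).reshape(-1, 1)
j = np.arange(-10, 11).reshape(1, -1)
X = np.exp(1j * x @ j)
b = np.exp(x).ravel()

c, residuals, rank, sv = np.linalg.lstsq(X, b, rcond=None)
print(sv[0] / sv[-1])                     # condition number, about 6.9e15
print(np.linalg.norm(X @ c - b, np.inf))  # small residual
```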

Thank you very much!

Yours sincerely,
Imre Patyi


On Thu, Apr 29, 2021 at 1:54 AM Roger Hui <[email protected]> wrote:

%. is for _linear_ least squares, so you have to convert a non-linear model
to a linear one, if you can.  (If you can't, %. won't help you.)  Your
model is:

    y = a * ^ b*x

Therefore:

    (^.y) = (^.a) + (^. ^ b*x)
    (^.y) = (^.a) + (b*x)

So for the data to be fit, compute c=: (^.y) %. 1,.x , whence a=^{.c and
b={:c.
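The linearization above, sketched numerically (a NumPy stand-in for the J expression; the parameter values a=2.5, b=0.7 are made-up sample data):

```python
import numpy as np

# Fit y = a * exp(b*x) by linearizing: log(y) = log(a) + b*x.
a_true, b_true = 2.5, 0.7          # assumed sample parameters
x = np.linspace(0.0, 1.0, 50)
y = a_true * np.exp(b_true * x)

# Linear least squares of log(y) against the columns [1, x]
# (the NumPy analogue of (^.y) %. 1,.x in J).
A = np.column_stack([np.ones_like(x), x])
log_a, b = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
a = np.exp(log_a)                  # a = ^ {. c
print(a, b)                        # recovers 2.5 and 0.7 on noiseless data
```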



On Wed, Apr 28, 2021 at 9:17 PM Imre Patyi <[email protected]> wrote:

Dear [email protected] ,

I have a few years' worth of toy experience with J.
I was playing with the toy problem of fitting a trigonometric
polynomial with complex coefficients to the curve y=e^x,
-1/2 <=x<= 1/2 through 1000 sample points.
I tried using the linear least squares command %.  but it
seems to me that it gives coefficients  c  several orders of
magnitude larger than what seems right.

Could you please tell me what I would need to change to fix the
following little J program (e.g., how to replace %. with a call
to LAPACK)?   (I am thinking that J might not have a
numerically stable QR decomposition or SVD behind %.  --- I tried to
find the source code file for %. and only found

https://github.com/jsoftware/jsource/blob/master/jsrc/vd.c

which talks about qr decomposition and a function qrr(), but
I did not spend too much time on trying to locate the source code
for qrr() ).

You can see from the comparable Octave and Python versions that
the result can be much more accurate than what I got out of J
for this ill-conditioned problem.  I also tried it on a raspberry
pi with a newer version of J, but it was still several orders
of magnitude wrong.

Thank you very much!

Yours sincerely,
Imre Patyi

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

x=:_0.5+(i.1000)%999
c=:(^x) %. X=:(^ 0j1 * x */ i:10)
(i.5){"1(i.4){X
,.c

NB. Sample run.  Windows 10, i5, J805.
NB.   x=:_0.5+(i.1000)%999
NB.   c=:(^x) %. X=:(^ 0j1 * x */ i:10)
NB.   (i.5){"1(i.4){X
NB. 0.283662j_0.958924 _0.210796j_0.97753 _0.653644j_0.756802 _0.936457j_0.350783 _0.989992j0.14112
NB. 0.274049j_0.961716 _0.219594j_0.975591 _0.659683j_0.751544 _0.938892j_0.344213 _0.989127j0.147063
NB. 0.264409j_0.964411 _0.228374j_0.973574 _0.66568j_0.746237 _0.94128j_0.337626 _0.988226j0.153001
NB. 0.254742j_0.967009 _0.237135j_0.971477 _0.671635j_0.740882 _0.943623j_0.331022 _0.987289j0.158934
NB.   ,.c
NB. 4.05363e6j_2.63539e6
NB. _5.04244e7j3.27836e7
NB. 2.90519e8j_1.88887e8
NB. _1.02273e9j6.64963e8
NB. 2.44111e9j_1.58715e9
NB. _4.13678e9j2.6895e9
NB. 5.05136e9j_3.2837e9
NB. _4.39569e9j2.8568e9
NB. 2.60811e9j_1.69431e9
NB. _9.47205e8j6.14882e8
NB. 1.5859e8j_1.02848e8
NB. _1.84539e7j1.19961e7
NB. 7.13468e7j_4.62898e7
NB. _1.82918e8j1.19028e8
NB. 3.31731e8j_2.16376e8
NB. _3.96625e8j2.58975e8
NB. 3.10552e8j_2.02872e8
NB. _1.60195e8j1.04678e8
NB. 5.30711e7j_3.46859e7
NB. _1.03281e7j6.75135e6
NB.      904018j_591051

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

x=linspace(-0.5,0.5,1000)' ; X=exp(1i*x.*(-10:10)) ;
X(1:4,1:5)
c=X\exp(x)
norm(X*c-exp(x),"inf")
f=@(x)exp(1i*x.*(-10:10))*c ;
y=linspace(-0.5,0.5,1e6)';norm(f(y)-exp(y),"inf")

%{
Sample run.
GNU Octave, version 5.2.0
Copyright (C) 2020 John W. Eaton and others.
Octave was configured for "x86_64-w64-mingw32".

x=linspace(-0.5,0.5,1000)' ; X=exp(1i*x.*(-10:10)) ;
X(1:4,1:5)
ans =

   0.28366 - 0.95892i  -0.21080 - 0.97753i  -0.65364 - 0.75680i  -0.93646 - 0.35078i  -0.98999 + 0.14112i
   0.27405 - 0.96172i  -0.21959 - 0.97559i  -0.65968 - 0.75154i  -0.93889 - 0.34421i  -0.98913 + 0.14706i
   0.26441 - 0.96441i  -0.22837 - 0.97357i  -0.66568 - 0.74624i  -0.94128 - 0.33763i  -0.98823 + 0.15300i
   0.25474 - 0.96701i  -0.23714 - 0.97148i  -0.67163 - 0.74088i  -0.94362 - 0.33102i  -0.98729 + 0.15893i

c=X\exp(x)
c =

   -0.000034734 + 0.000054543i
    0.000612095 - 0.000907876i
   -0.005135034 + 0.007102460i
    0.027233003 - 0.034494175i
   -0.102184601 + 0.115336800i
    0.287266336 - 0.276216782i
   -0.622606031 + 0.467755153i
    1.041169968 - 0.487260179i
   -1.247452051 + 0.015246341i
    0.484987168 + 0.987560834i
    1.412953756 - 0.175021510i
   -0.003565228 - 0.668242391i
   -0.507142432 - 0.257223380i
    0.346131942 + 0.638760179i
   -0.148113496 - 0.545360752i
    0.042654051 + 0.308242889i
   -0.006955253 - 0.125735396i
   -0.000093980 + 0.037054597i
    0.000344207 - 0.007552276i
   -0.000075749 + 0.000958155i
    0.000006065 - 0.000057233i

norm(X*c-exp(x),"inf")
ans =    4.6574e-15
f=@(x)exp(1i*x.*(-10:10))*c ;
y=linspace(-0.5,0.5,1e6)';norm(f(y)-exp(y),"inf")
ans =    5.2132e-15
%}

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

import numpy as np
x=np.reshape(np.linspace(-0.5,0.5,1000),(1000,1))
n=np.reshape(np.arange(-10,1+10),(1,21))
X=np.exp((0+1j)*x.dot(n))
c=np.linalg.lstsq(X,np.exp(x))[0]
np.linalg.norm(X.dot(c)-np.exp(x),ord=np.inf)
y=np.random.rand(10**6,1)-0.5
np.linalg.norm(np.exp((0+1j)*y.dot(n)).dot(c)-np.exp(y),ord=np.inf)

"""
Sample run.
Python 3.5.2 |Anaconda custom (64-bit)| (default, Jul  5 2016, 11:41:13)
[MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
import numpy as np
x=np.reshape(np.linspace(-0.5,0.5,1000),(1000,1))
n=np.reshape(np.arange(-10,1+10),(1,21))
X=np.exp((0+1j)*x.dot(n))
c=np.linalg.lstsq(X,np.exp(x))[0]
__main__:1: FutureWarning: `rcond` parameter will change to the default of
machine precision times ``max(M, N)`` where M and N are the input matrix
dimensions.
To use the future default and silence this warning we advise to pass
`rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
np.linalg.norm(X.dot(c)-np.exp(x),ord=np.inf)
3.7732211514985746e-15
y=np.random.rand(10**6,1)-0.5
np.linalg.norm(np.exp((0+1j)*y.dot(n)).dot(c)-np.exp(y),ord=np.inf)
4.815880577669688e-15
"""

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
