The LLLplus package (https://github.com/christianpeel/LLLplus.jl) provides
functions for LLL and Seysen lattice reduction, a 'sphere decoder' to
solve the closest vector problem, and a VBLAST matrix decomposition
(used in multi-antenna wireless). For a basic example, try
 using LLLplus
 N = 1000;
 H = randn(N,N) + im*randn(N,N);
 @time (B,T) = lll(H);
This produces B, an LLL-reduced basis for the lattice generated by the
columns of H, and T, the corresponding unimodular transformation matrix.
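
As a quick sanity check on a small instance (assuming the package's
convention that the reduced basis satisfies B = H*T; that convention and
the exact return values are worth confirming against the package docs),
something like the following should hold:

 using LLLplus, LinearAlgebra
 H = randn(4,4) + im*randn(4,4);
 (B,T) = lll(H);
 B ≈ H*T            # assumed convention: reduced basis B = H*T
 abs(det(T)) ≈ 1    # T unimodular: its determinant is a unit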

I have a question for the BLAS gurus: I can use the BLAS function
   B[:,lx] = axpy!(-mu, B[:,k], B[:,lx])
to accelerate the code
  B[:,lx] = B[:,lx] - mu * B[:,k]
but what I wanted to do was simply
  axpy!(-mu, B[:,k], B[:,lx])
I suspect that indexing a column as B[:,lx] creates a copy rather than a
reference, so axpy! updates the copy and the in-place operation never
touches B itself. Any suggestions for improving this? See line 47 of
https://github.com/christianpeel/LLLplus.jl/blob/master/src/lll.jl for the
original code.
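
One workaround that seems to avoid the copy is to hand axpy! views of
the columns, so it mutates B directly. A minimal sketch in current Julia
syntax (using axpy! and view from the LinearAlgebra standard library; I
have not benchmarked this against the slice-assignment version above):

 using LinearAlgebra
 B = randn(4,4) + im*randn(4,4)
 k, lx = 1, 2
 mu = 0.5 + 0.25im
 # Slicing with B[:,k] allocates a copy, so the in-place update is lost:
 #   axpy!(-mu, B[:,k], B[:,lx])
 # Views alias the underlying columns, so axpy! updates B in place:
 axpy!(-mu, view(B,:,k), view(B,:,lx))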

thanks

-- 
[email protected]