did this code ever find its way into DualNumbers.jl? I do anticipate it's
going to be quite helpful.
-Thom
On Fri, Jun 6, 2014 at 10:32 AM, Thomas Covert thom.cov...@gmail.com
wrote:
Haven't been able to try it since I'm currently travelling. I bet it will
turn out to be useful though.
Maybe. Did someone create a pull request?
— John
On Jun 22, 2014, at 5:22 PM, Thomas Covert thom.cov...@gmail.com wrote:
did this code ever find its way into DualNumbers.jl? I do anticipate it's
going to be quite helpful.
-Thom
On Fri, Jun 6, 2014 at 10:32 AM, Thomas Covert
There was a PR that was prematurely merged; it's still being discussed:
https://github.com/JuliaDiff/DualNumbers.jl/pull/11
In the meantime, a generic Cholesky factorization has been proposed (
https://github.com/JuliaLang/julia/pull/7236) which would solve the
original issue but not necessarily
Any idea why type inference is failing on full(chol(A, :U))?
It can't be determined from the types of the arguments what the result will
be, because you could give different symbols to access different parts of
the factorization. I think this was a design choice to make it easier to
The conj in DualNumbers is a no-op to avoid the problem with ctranspose.
2014-06-03 10:07 GMT+02:00 Chris Foster chris...@gmail.com:
I was imagining that ' is the traditional transpose, and I think
that's the right thing - I'm thinking of a dual number as a real
number plus a *real*
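That convention (conj as the identity on a dual number, so ' stays the plain transpose) can be sketched with a toy Python class; this is an illustration of the idea only, not the DualNumbers.jl implementation, and the names here are made up:

```python
class Dual:
    """Toy dual number a + b*eps with eps**2 == 0 (illustrative only)."""
    def __init__(self, re, du=0.0):
        self.re, self.du = float(re), float(du)

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.re + other.re, self.du + other.du)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps
        return Dual(self.re * other.re,
                    self.re * other.du + self.du * other.re)

    def conj(self):
        # Treating conj as a no-op keeps ' a plain transpose for dual matrices.
        return Dual(self.re, self.du)

    def __repr__(self):
        return f"Dual({self.re}, {self.du})"

x = Dual(3.0, 1.0)
print(x * x)  # Dual(9.0, 6.0): the dual part carries d(t**2)/dt at t = 3
```

With seed value 1.0 in the dual part, arithmetic on the real part automatically propagates derivatives, which is what makes a dual-number Cholesky attractive in the first place.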
On Tue, Jun 3, 2014 at 1:13 PM, Chris Foster chris...@gmail.com wrote:
So you can compute the real part L of the Cholesky decomposition exactly as
usual. Given that you now have L, you want the lower triangular matrix M.
Because L and M are lower triangular that's actually quite easy: matrix
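Assuming the message above is describing the usual first-order perturbation of the Cholesky factor, the recipe can be sketched in NumPy as follows; phi and chol_dual are names I made up for illustration, not functions from the thread's gist:

```python
import numpy as np

def phi(S):
    """Strictly lower triangle of S plus half of its diagonal."""
    return np.tril(S, -1) + 0.5 * np.diag(np.diag(S))

def chol_dual(A, B):
    """Given A + eps*B (A SPD, B symmetric), return (L, M) such that
    (L + eps*M) is lower triangular and L @ M.T + M @ L.T == B,
    i.e. (L + eps*M)(L + eps*M)' == A + eps*B to first order in eps."""
    L = np.linalg.cholesky(A)            # real part: ordinary Cholesky
    # S = L^{-1} B L^{-T}; plain solves are fine for a small demo
    S = np.linalg.solve(L, np.linalg.solve(L, B).T).T
    M = L @ phi(S)                       # dual part of the factor
    return L, M

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 5))
A = X @ X.T + 5 * np.eye(5)              # SPD real part
B = rng.standard_normal((5, 5))
B = B + B.T                              # symmetric dual part
L, M = chol_dual(A, B)
print(np.allclose(L @ M.T + M @ L.T, B))  # True
```

The point of the construction is exactly what the message says: all the heavy work (one Cholesky, two triangular solves) happens on real matrices, so LAPACK can be used throughout.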
On Wed, Jun 4, 2014 at 2:12 AM, Chris Foster chris...@gmail.com wrote:
Fiddling with Base.BLAS.dot only got me as far as a segfault so far.
Actually I think I've fixed that now in the gist and using BLAS.dot
directly is faster, though still not very impressive. According to
@time, I've still
Nice work, Chris.
Your code is about twice as fast (when N = 1000) as the code I initially
posted. I think the speed gains come from the fact that your code does all
its work on real numbers, so it needs just one floating-point operation
per arithmetic step, while my choldn works directly on
Ah right, that seems to be related to the problem. It's a little better if
L = chol(A, :U)
L = full(L)
is replaced with
L = full(chol(A, :U))
but the big improvement comes from putting in a type annotation there:
L = full(chol(A, :U)) :: typeof(A)
I'm not sure if that's the
On Tue, Jun 3, 2014 at 4:43 AM, Thomas Covert thom.cov...@gmail.com wrote:
I was hoping to find some neat linear algebra trick that would let me
compute a DualNumber cholesky factorization without having to resort to
non-LAPACK code, but I haven't found it yet. That is, I figured that I
could
Thanks for that, Chris. I also worked out a similar derivation - though I
wasn't able to prove to myself that the equation B = L*M' + M*L' has a
unique solution for M.
This feels similar to the Sylvester Equation, but it's not quite the same...
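For what it's worth, uniqueness does hold once the real part is positive definite; a sketch of the argument, in my own notation (the map Phi below is not from the thread):

```latex
Write the dual matrix as $A_r + \epsilon B$ with $A_r = L L^T$, $L$ invertible,
and seek lower-triangular $M$ solving $L M^T + M L^T = B$. Setting
$C = L^{-1} M$ (still lower triangular),
\[
  L M^T + M L^T = B
  \iff
  C^T + C = L^{-1} B L^{-T} =: S .
\]
A lower-triangular $C$ with $C + C^T = S$ is uniquely determined: its strictly
lower part must equal the strictly lower part of $S$, and its diagonal must be
half the diagonal of $S$. Writing $C = \Phi(S)$ for that map, the unique
solution is $M = L\,\Phi(S)$.
```

So the equation is Sylvester-like, but the triangularity constraint on M is what pins down the solution.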
On Mon, Jun 2, 2014 at 10:13 PM, Chris Foster
Ah good hint. Apparently a slightly more general version is called
the *-Sylvester Equation
http://gauss.uc3m.es/web/personal_web/fteran/papers/equation_lama_rev.pdf
and quite some work has been done, though I haven't time to dig
through the references. The big difference in your case is that
By the way, using your notation, do we want the traditional transpose of M,
or the conjugate transpose of M (i.e., -1 * M')? Wikipedia seems to
suggest that matrix operations on Dual Numbers require care similar to
complex matrices. I wonder if this makes the problem more clear (or not)?