I applied to GSoC with an idea about allowing MLE/MPD estimation using
Mamba models. In order to do this, it would be useful to be able to do
symbolic calculus on log-pdf values (so as to get the "score", "observed
information", etc.). I'm thinking about what the right Julian way to do this
would be.
Say I have a Mamba model which includes the following statement:
alpha = Stochastic(1,
    (mu_alpha, s2_alpha) -> Normal(mu_alpha, sqrt(s2_alpha)),
    false
)
I'd like to be able to pass a SymPy.Sym symbolic variable to the lambda on
the second line, and get a SymbolicNormal object sn; then do logpdf(sn,
alpha::SymPy.Sym) and get an expression which I could then differentiate
over any of its three variables (mu_alpha, s2_alpha, or alpha).
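To make the goal concrete, here is roughly the end result I'm after, sketched
with SymPy.jl directly (no SymbolicNormal yet, so the Normal log-pdf is
written out by hand; the symbol names just mirror the model above):

```julia
using SymPy  # Julia wrapper around Python's sympy

# Free symbols standing in for the model quantities.
@syms mu_alpha s2_alpha alpha

# log-pdf of Normal(mu_alpha, sqrt(s2_alpha)) evaluated at alpha,
# written out by hand.
logpdf_expr = -log(2*PI*s2_alpha)/2 - (alpha - mu_alpha)^2 / (2*s2_alpha)

# The "score" with respect to mu_alpha: mathematically (alpha - mu_alpha)/s2_alpha.
score_mu = diff(logpdf_expr, mu_alpha)

# The observed information in alpha (negative second derivative): 1/s2_alpha.
obs_info = -diff(logpdf_expr, alpha, 2)
```

All three variables are ordinary symbols, so differentiating over any of them
works the same way.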
As far as I can tell, abusing the type hierarchy so that a call to
Normal(a::SymPy.Sym, b::SymPy.Sym) returns a SymbolicNormal (which is not a
subtype of Normal but rather a sibling type, both inheriting from
AbstractNormal) is probably a no-no. And making Mamba users call some
GiveMeANormalHere() function instead of just the eponymous Normal()
constructor is annoying. The next option that comes to mind is to use
metaprogramming to decompile the lambda and replace the Normal() call with
a SymbolicNormal() call.... Note that this silly decompiling, replacing,
calculus, etc. could all happen just once, and actual optimization could
still be a fast loop.
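The rewriting half of that idea is easy enough on a *quoted* expression; the
hard part is getting such an expression back out of an already-constructed
lambda. A minimal sketch (SymbolicNormal is still hypothetical here):

```julia
# Walk an expression tree, renaming calls to `from` into calls to `to`.
function replace_call(ex, from::Symbol, to::Symbol)
    isa(ex, Expr) || return ex
    args = Any[replace_call(a, from, to) for a in ex.args]
    # A :call node's first argument is the function being called.
    if ex.head == :call && !isempty(args) && args[1] == from
        args[1] = to
    end
    return Expr(ex.head, args...)
end

# On a quoted stabby lambda this does exactly what I want:
ex = :((mu_alpha, s2_alpha) -> Normal(mu_alpha, sqrt(s2_alpha)))
sym_ex = replace_call(ex, :Normal, :SymbolicNormal)
```

So if I could recover the lambda's AST, the one-time rewrite itself would be
straightforward.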
Am I going way too crazy with this last idea? If not, how do you get the
(lowered?) AST for a "stabby lambda"? "code_lowered" and friends do not
seem to work, at least not in the ways I'd expect them to.
Thanks,
Jameson Quinn