> Hi,
>
> Thanks for this - yes I think I see that now. (The values do indeed
> differ by n_dim * n_samples * log(scale), but no 0.5 here.)
>
> I guess in a way the issue is that we typically evaluate point
> likelihoods, rather than e.g. integrals within some bounds of certainty
> of the measurement. If doing the latter, then the size of that 'box'
> would also vary with my scaling factor, and should compensate.
Not sure I get your point: the expectation of the log-likelihood (i.e. 
the negative differential entropy) also scales linearly with the log of 
the dilation factor (indeed without the 1/2).
However, this has little impact on e.g. model selection problems, since 
the global scaling factor is fixed by the data, and is therefore the 
same for all models tested.
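A quick numerical sketch of the point above (hypothetical example, using
scipy's multivariate_normal with ML-fitted Gaussian parameters): dilating
the data by a factor s shifts the total log-likelihood by
n_samples * n_dim * log(s), with no factor of 1/2, since the ML mean
scales by s and the ML covariance by s**2.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n_samples, n_dim = 200, 3
X = rng.normal(size=(n_samples, n_dim))
s = 2.5  # dilation (scaling) factor, chosen arbitrarily

def total_loglik(X):
    # ML-fit a Gaussian: sample mean and biased sample covariance
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False, bias=True)
    return multivariate_normal(mu, cov).logpdf(X).sum()

ll = total_loglik(X)
ll_scaled = total_loglik(s * X)

# The gap matches n_samples * n_dim * log(s) -- no 0.5 anywhere
print(ll - ll_scaled)
print(n_samples * n_dim * np.log(s))
```

The constant n_samples * n_dim * log(s) depends only on the data and the
scaling, not on the model, which is why it cancels in model comparison.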

Best,

Bertrand

_______________________________________________
Scikit-learn-general mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/scikit-learn-general
