Ah, of course. Great explanation. So I suppose you should see the
desired results with lambda = 0, although you generally don't want to
set it to 0, since that disables regularization entirely.
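The tradeoff Xiangrui describes below can be reproduced on a toy ridge-regression problem. This is a minimal NumPy sketch, not Spark code; the data, lambda, learning rate, and iteration count are all invented for illustration. Starting gradient descent from the unregularized least-squares solution, the squared loss alone increases while the full objective (squared loss + L2 penalty) is non-increasing:

```python
import numpy as np

# Hypothetical toy data: 50 samples, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

lam = 10.0  # regularization strength (lambda), chosen arbitrarily

# Start from the unregularized least-squares solution, where the
# squared loss is already at its minimum.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

lr = 1e-3  # small enough step size for monotone descent
objectives, sq_losses = [], []
for _ in range(200):
    resid = X @ w - y
    sq_loss = float(resid @ resid)
    sq_losses.append(sq_loss)
    objectives.append(sq_loss + lam * float(w @ w))
    # Gradient of the regularized objective.
    grad = 2 * X.T @ resid + 2 * lam * w
    w -= lr * grad

# Regularization pulls w away from the least-squares minimum, so the
# squared loss alone goes up ...
assert sq_losses[-1] > sq_losses[0]
# ... while the full regularized objective is non-increasing.
assert all(a >= b - 1e-9 for a, b in zip(objectives, objectives[1:]))
```

With lambda = 0 the gradient reduces to the pure squared-loss gradient, and the training RMSE would be non-increasing as well, matching the observation above.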

On Wed, Nov 26, 2014 at 7:53 PM, Xiangrui Meng <men...@gmail.com> wrote:
> The training RMSE may increase due to regularization. Squared loss
> only represents part of the global loss. If you watch the sum of the
> squared loss and the regularization, it should be non-increasing.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org