Yeah this has gone well off-road.

ALS is not non-deterministic because of hardware errors or cosmic
rays. It also has nothing to do with floating-point round-off; or at
least, that is not the primary source of non-determinism, by several
orders of magnitude.

ALS starts from a random initial solution, so different runs will
generally end at different solutions. The overall problem is
non-convex, and the process will not necessarily converge to the same
local optimum.

Randomness is a common feature of machine learning: centroid selection
in k-means, the 'stochastic' in SGD, the bootstrap sampling in random
forests, etc. I don't think the question is why randomness is useful,
right?

For ALS... I don't quite understand the question; what's the
alternative? Certainly I have always seen it formulated in terms of a
random initial solution. You don't want to always start from the same
point because of local minima. Ideally you start from many points and
take the best solution.
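
A best-of-N restarts wrapper around the sketch above is the usual way
to do that (again a rough illustration, not Mahout's API):

    def als_best_of(R, n_restarts=10, **kwargs):
        """Run ALS from several random starting points and keep the
        lowest-error factorization. Reuses als() from the sketch
        above."""
        best, best_err = None, float("inf")
        for seed in range(n_restarts):
            U, V = als(R, seed=seed, **kwargs)
            err = np.linalg.norm(R - U @ V.T)
            if err < best_err:
                best, best_err = (U, V), err
        return best, best_err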

On Mon, Jun 24, 2013 at 11:22 PM, Ted Dunning <ted.dunn...@gmail.com> wrote:
> This is a chestnut that commonly gets trotted out, but I doubt that
> the effects the OP was worried about were on the same scale.
> Non-associativity of FP arithmetic on doubles rarely has a very large
> effect.
>
>
> On Mon, Jun 24, 2013 at 11:17 PM, Michael Kazekin <kazm...@hotmail.com> wrote:
>
>> Any algorithm is non-deterministic because of the non-deterministic
>> behavior of the underlying hardware, of course :) But that's off-topic.
>> I'm talking about a specific implementation of a specific algorithm, and
>> in general I'd like to know whether at least some very general properties
>> of the algorithm's implementation are conserved (and why the authors
>> added an intentional non-deterministic component to the implementation).
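
For what it's worth, the effect Ted mentions is real but tiny:
addition of doubles is commutative but not associative, so a parallel
reduction that changes the summation order perturbs only the last few
bits of the result. A two-line illustration:

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c == a + (b + c))  # False: grouping matters
    print((a + b) + c, a + (b + c))    # 0.6000000000000001 vs 0.6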
