zhengruifeng edited a comment on issue #27519: [SPARK-30770][ML] avoid vector conversion in GMM.transform
URL: https://github.com/apache/spark/pull/27519#issuecomment-591219332
 
 
Current master impl and commit [7686e04](https://github.com/apache/spark/commit/7686e04c648384251b98c0c335c084b1f654188e) both need to create two vectors in `logpdf`, while the initial commit [bc1586e](https://github.com/apache/spark/pull/27519/commits/bc1586eafa58748b8ae7855184d903c22c1088a4) only needs to create one vector.
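
For illustration, here is a hedged Breeze sketch of the allocation difference; the class and method names are mine, not Spark's actual internals, and it only shows the shifted vector used inside `logpdf`, not the full density:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}

// Illustrative only: rootSigmaInv plays the role of the whitening matrix A below.
class GaussianSketch(rootSigmaInv: DenseMatrix[Double], mean: DenseVector[Double]) {
  // Precomputed once per Gaussian, in the style of bc1586e.
  private val rootSigmaInvMulMean: DenseVector[Double] = rootSigmaInv * mean

  // Master-style: two fresh vectors per call, (x - mean) and the product.
  def shiftTwoAllocs(x: DenseVector[Double]): DenseVector[Double] =
    rootSigmaInv * (x - mean)

  // bc1586e-style: only rootSigmaInv * x is a fresh vector; the subtraction
  // updates it in place against the precomputed rootSigmaInvMulMean.
  def shiftOneAlloc(x: DenseVector[Double]): DenseVector[Double] = {
    val shifted = rootSigmaInv * x
    shifted -= rootSigmaInvMulMean
    shifted
  }
}
```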
   
All the Scala tests pass in `bc1586e`; however, it fails on the Python side. We can see that the model coefficients are almost the same, and the only significant difference is the `logLikelihood`.
   
The `logLikelihood` issue is the same as in https://github.com/apache/spark/pull/26735. @huaxingao helped test it and found that if we set `maxIter > 25`, then all impls converge to the same cost.
It looks like a small numeric perturbation (in https://github.com/apache/spark/pull/26735, the way `sumWeights` is accumulated; in `bc1586e`, the way `logpdf` is computed: `A*(x-mean) -> A*x - A*mean`) causes the Python test to converge to `26.193922336279954` at iteration=5, so I am wondering if we can update the Python test by setting a larger `maxIter`?
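
To make the perturbation concrete, here is a minimal Breeze sketch (hypothetical values) showing that the two algebraically equal forms need not be bit-identical in floating point:

```scala
import breeze.linalg.{DenseMatrix, DenseVector}

object LogpdfPerturbation {
  def main(args: Array[String]): Unit = {
    // Hypothetical 2x2 example; many inputs exhibit the effect.
    val A    = new DenseMatrix(2, 2, Array(0.1, 0.7, 0.3, 0.9))
    val x    = DenseVector(0.3, 0.6)
    val mean = DenseVector(0.2, 0.5)

    val fused = A * (x - mean)    // form used on master
    val split = A * x - A * mean  // form used in bc1586e

    // Any difference is on the order of the unit roundoff, but over a few EM
    // iterations it can move an early logLikelihood checked at maxIter=5.
    println(fused - split)
  }
}
```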
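
And if we bump `maxIter` past 25, the checked value should be stable across impls; a self-contained sketch with toy data (not the actual py test fixture):

```scala
import org.apache.spark.ml.clustering.GaussianMixture
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object MaxIterCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]").appName("gmm-maxiter").getOrCreate()
    import spark.implicits._

    // Toy two-cluster data standing in for the py test fixture.
    val df = Seq(
      Vectors.dense(-0.10, -0.05), Vectors.dense(-0.01, -0.10),
      Vectors.dense(0.90, 0.80), Vectors.dense(0.75, 0.935),
      Vectors.dense(-0.05, -0.12), Vectors.dense(0.85, 0.90)
    ).map(Tuple1.apply).toDF("features")

    val model = new GaussianMixture()
      .setK(2)
      .setSeed(1)
      .setMaxIter(30) // > 25, past where all impls converge to the same cost
      .fit(df)

    println(model.summary.logLikelihood)
    spark.stop()
  }
}
```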
