Github user dongjoon-hyun commented on the pull request:

    https://github.com/apache/spark/pull/11527#issuecomment-195935665
  
    Right, as you said, we can use the optimizer object directly. I think that's 
the main reason this PR hasn't received any response from the ML committers.
    
    Initially, what I focused on was the *consistency* of the API across 
algorithms. The other algorithms that have `regParam` provide it in this manner 
and also maintain their own `regParam` value. You may remember the list of 
`regParam` values I made before. So I wanted to make it complete by adding the 
two missing parts.
    
    But what I'm not sure about here is Spark MLlib's direction. As you said, 
using the optimizer is the more recommended way, now and in the future, so I 
think this PR should be closed. If I can get some clear advice on this before 
closing the PR, I'd really appreciate it. :)
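    For readers of the thread, the alternative being discussed can be sketched 
roughly as follows (the choice of `LogisticRegressionWithSGD` and the parameter 
values are illustrative, not from the PR): instead of adding a dedicated 
regularization setter on each algorithm wrapper, the caller reaches into the 
algorithm's `optimizer` member and configures it there.

```scala
// Sketch only: tune the regularization parameter through the optimizer
// object directly, rather than through a setter on the algorithm itself.
// Assumes a Spark environment with spark.mllib on the classpath.
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD

val lr = new LogisticRegressionWithSGD()
lr.optimizer
  .setRegParam(0.1)       // regularization parameter (illustrative value)
  .setNumIterations(200)  // gradient-descent iterations
  .setStepSize(1.0)       // learning rate
// val model = lr.run(trainingData)  // trainingData: RDD[LabeledPoint]
```

    This is why a convenience `setRegParam` on the wrapper is arguably 
redundant: the optimizer already exposes the knob, at the cost of a less 
uniform API across algorithms.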

