Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/5539#issuecomment-93791077
@rakeshchalasani Dropout regularization has been widely used for deep
learning, but it is still pretty experimental for other types of models.
Because of that, I think it's unlikely that this PR will be accepted for now.
There are also quite a few changes planned for the optimization
framework, so this PR would likely conflict with those and be delayed. (I don't
want to waste your time.)
However, there may well be Spark users who would like to try your code, so
I'd recommend proceeding as follows:
* Make a package users can download from http://spark-packages.org/
* Meanwhile, if you can, run some experiments on a cluster to see how much,
and in what situations, dropout improves speed and/or accuracy.
* If the experiments prove successful, we could discuss porting your code
into Spark itself once the optimization framework changes are complete.
Does that sound good? If so, can you please close the PR? Once you turn
this into a package, I'd recommend posting the link on the JIRA to help others
find it. Thanks very much!