GitHub user NarineK opened a pull request:
https://github.com/apache/spark/pull/11179
[SPARK-13295][ML][MLlib] AFTSurvivalRegression.AFTAggregator
improvements - avoids creating new instances of arrays/vectors for each record
As noted by the TODO in AFTAggregator.add(data: AFTPoint), a new array
is created for the intercept value and concatenated with the array
containing the betas; the resulting array is converted into a DenseVector,
which in turn is converted into a Breeze vector.
This is expensive and not particularly elegant.
I've tried to solve the problem described above through a simple algebraic
decomposition: keeping and treating the intercept independently.
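To illustrate the decomposition, here is a minimal sketch (in Python rather than the Scala of the actual patch, and with hypothetical names `intercept`, `beta`, and `x` standing in for the aggregator's fields) of the algebraic identity the change relies on:

```python
def margin_concat(intercept, beta, x):
    # Original approach: prepend the intercept to the coefficients and 1.0
    # to the features, allocating two new arrays on every record.
    coeffs = [intercept] + beta
    features = [1.0] + x
    return sum(c * f for c, f in zip(coeffs, features))

def margin_decomposed(intercept, beta, x):
    # Decomposed approach: dot([1.0] + x, [intercept] + beta) equals
    # intercept + dot(x, beta), so the intercept term can be handled
    # separately with no per-record allocation.
    return intercept + sum(b * v for b, v in zip(beta, x))
```

Both functions compute the same margin; the second avoids building the concatenated arrays for each record.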
Please let me know what you think and whether you have any questions.
Thanks,
Narine
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/NarineK/spark survivaloptim
Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/spark/pull/11179.patch
To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:
This closes #11179
----
commit 8d443e9d7cd4b8b4cf7a4e14bec8287b7db6aff7
Author: Narine Kokhlikyan <[email protected]>
Date: 2016-02-12T02:42:08Z
Initial commit - AFTSurvivalRegression improvements
----
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]