[
https://issues.apache.org/jira/browse/SPARK-21476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16154823#comment-16154823
]
Nirav Kuntar commented on SPARK-21476:
--------------------------------------
Hi [[email protected]], we have made these changes
(https://github.com/niravkuntar/spark/pull/1/files) in
ProbabilisticClassifier.scala, which broadcast the model before performing the
distributed operation on the dataset. Because explicit broadcasting stores the
data in deserialized form on each executor, tasks no longer need to deserialize
the model (as [~sagraw] pointed out, deserialization was taking time). We have
noticed a significant improvement in the execution time of a streaming job with
these changes.
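For reference, the general shape of the pattern is below: a minimal sketch (not the actual patch), assuming the model holds its trees in memory and that transform applies predictRaw via a UDF over the features column. The class and field names here are illustrative only:

{code:scala}
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.{col, udf}

// Hypothetical sketch of broadcasting a model inside transform().
// Without the broadcast, the model travels inside each task's closure and is
// deserialized once per task; with it, each executor deserializes the model
// once and every task reuses the cached, already-deserialized copy.
def transformWithBroadcast(model: SomeClassificationModel,  // illustrative type
                           dataset: Dataset[_]): DataFrame = {
  // Broadcast once per transform call, not once per task.
  val bcModel: Broadcast[SomeClassificationModel] =
    dataset.sparkSession.sparkContext.broadcast(model)

  val predictRawUDF = udf { features: Vector =>
    // Tasks read the executor-local deserialized copy.
    bcModel.value.predictRaw(features)
  }

  dataset.withColumn("rawPrediction", predictRawUDF(col("featuresCol")))
}
{code}

The saving is most visible in jobs that call transform repeatedly on small batches (e.g. streaming), where per-task deserialization otherwise dominates.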
> RandomForest classification model not using broadcast in transform
> ------------------------------------------------------------------
>
> Key: SPARK-21476
> URL: https://issues.apache.org/jira/browse/SPARK-21476
> Project: Spark
> Issue Type: Improvement
> Components: ML
> Affects Versions: 2.2.0
> Reporter: Saurabh Agrawal
> Priority: Minor
>
> I noticed significant task deserialization latency while running prediction
> with pipelines using RandomForestClassificationModel. While digging into the
> source, I found that the transform method in RandomForestClassificationModel
> binds to its parent ProbabilisticClassificationModel, and the only concrete
> definition that RandomForestClassificationModel provides and that is actually
> used in transform is predictRaw. Broadcasting is not used in predictRaw.
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)