CBribiescas edited a comment on pull request #32813:
URL: https://github.com/apache/spark/pull/32813#issuecomment-857390387


   To elaborate on the merge condition part, may I propose the following:
expose a `pruning_strategy` parameter (in both Scala and PySpark) with the following behavior:
   - `none` - No pruning is done
   - `class` - The existing behavior, where sibling leaves are pruned when their
predicted classes match
   - `probability` - Prune only when the predicted probabilities match as well (not sure
how likely that is in practice)
   
   The default would be either `none` or `probability`.  Of course, I would add
unit tests and documentation for the above.
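   To make the proposed semantics concrete, here is a minimal plain-Python sketch of the merge decision; the `(prediction, probabilities)` leaf representation and the `can_merge_siblings` helper are hypothetical illustrations, not Spark's actual tree internals:

```python
def can_merge_siblings(left, right, pruning_strategy="none"):
    """Decide whether two sibling leaves may be merged under a strategy.

    `left` and `right` are hypothetical (prediction, probabilities) pairs.
    """
    if pruning_strategy == "none":
        return False  # never prune
    if pruning_strategy == "class":
        return left[0] == right[0]  # prune when predicted classes agree
    if pruning_strategy == "probability":
        return left[1] == right[1]  # prune only on identical probabilities
    raise ValueError(f"unknown pruning_strategy: {pruning_strategy}")


# Two sibling leaves predicting the same class with different probabilities:
a = (1.0, [0.2, 0.8])
b = (1.0, [0.1, 0.9])
print(can_merge_siblings(a, b, "none"))         # False
print(can_merge_siblings(a, b, "class"))        # True
print(can_merge_siblings(a, b, "probability"))  # False
```

   Under `class` the two leaves above would merge (same prediction), while `probability` keeps them separate because the distributions differ.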
   
   Before I implement this, does it sound good?
   
   Also, the formatting was prescribed by the Scala style check described here:
https://spark.apache.org/contributing.html  Should I remove it or leave it as-is?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


