dongjoon-hyun commented on a change in pull request #33630:
URL: https://github.com/apache/spark/pull/33630#discussion_r682317336



##########
File path: project/MimaExcludes.scala
##########
@@ -36,6 +36,10 @@ object MimaExcludes {
 
   // Exclude rules for 3.3.x from 3.2.0 after 3.2.0 release
   lazy val v33excludes = v32excludes ++ Seq(
+    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.ml.param.FloatParam.jValueEncode"),
+    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.ml.param.FloatParam.jValueDecode"),
+    ProblemFilters.exclude[IncompatibleMethTypeProblem]("org.apache.spark.mllib.tree.model.TreeEnsembleModel#SaveLoadV1_0.readMetadata"),
+    ProblemFilters.exclude[IncompatibleResultTypeProblem]("org.apache.spark.sql.expressions.MutableAggregationBuffer.jsonValue")

Review comment:
       `MutableAggregationBuffer` seems to be used in our public API (see the `initialize` signature quoted below). Is this incompatibility okay?
   
   ```
     /**
     * Initializes the given aggregation buffer, i.e. the zero value of the aggregation buffer.
     *
     * The contract should be that applying the merge function on two initial buffers should just
      * return the initial buffer itself, i.e.
      * `merge(initialBuffer, initialBuffer)` should equal `initialBuffer`.
      *
      * @since 1.5.0
      */
     def initialize(buffer: MutableAggregationBuffer): Unit
   ```
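   
   For context, a minimal sketch of how user code receives a `MutableAggregationBuffer` through `UserDefinedAggregateFunction` (the quoted `initialize` appears to be that class's method). The UDAF name and schema below are made up for illustration and are not part of this PR; the point is only that a binary-incompatible change to `MutableAggregationBuffer` is visible to compiled user code:
   
   ```
   import org.apache.spark.sql.Row
   import org.apache.spark.sql.expressions.{MutableAggregationBuffer, UserDefinedAggregateFunction}
   import org.apache.spark.sql.types._
   
   // Hypothetical user-defined aggregate that sums a double column.
   class DoubleSum extends UserDefinedAggregateFunction {
     override def inputSchema: StructType = StructType(StructField("value", DoubleType) :: Nil)
     override def bufferSchema: StructType = StructType(StructField("sum", DoubleType) :: Nil)
     override def dataType: DataType = DoubleType
     override def deterministic: Boolean = true
   
     // The buffer handed to user code here is a MutableAggregationBuffer.
     override def initialize(buffer: MutableAggregationBuffer): Unit = buffer(0) = 0.0
   
     override def update(buffer: MutableAggregationBuffer, input: Row): Unit =
       if (!input.isNullAt(0)) buffer(0) = buffer.getDouble(0) + input.getDouble(0)
   
     override def merge(buffer1: MutableAggregationBuffer, buffer2: Row): Unit =
       buffer1(0) = buffer1.getDouble(0) + buffer2.getDouble(0)
   
     override def evaluate(buffer: Row): Any = buffer.getDouble(0)
   }
   ```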
   
   cc @HyukjinKwon, @cloud-fan




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


