LuciferYang commented on PR #40925:
URL: https://github.com/apache/spark/pull/40925#issuecomment-1578026177

   @hvanhovell 
   
   I think there is a scenario here that we need to clarify:
   
   - First, the sql module may contain a definition like the following:
   
   ```scala
   package org.apache.spark.sql
   
   class ClassA {
     def functionA(): DataFrame = ...

     // overloaded: same name, different signature, narrower visibility
     private[sql] def functionA(colName: String): DataFrame = ...
   }
   
   ```
   
   `ClassA` declares two overloads named `functionA`, one of which is `private[sql]`.
   
   
   - Afterwards, we may define the corresponding `ClassA` in the connect client module as follows:
   
   
   ```scala
   package org.apache.spark.sql
   
   class ClassA {
   
   }
   
   ```
   
   Then, if we run the MiMa check, it will pass even though `ClassA` in the connect client module has no definition of `functionA` at all. This is because `private[sql] def functionA` is included in `.generated-mima-member-excludes` by default, which is equivalent to manually adding `ProblemFilters.exclude[Problem]("org.apache.spark.sql.ClassA.functionA")`.
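   For reference, keeping the manual exclude would mean an entry along these lines (a sketch only; the `DirectMissingMethodProblem` filter type and the `project/MimaExcludes.scala` location follow Spark's usual convention for such entries):

   ```scala
   // Hypothetical manual entry in project/MimaExcludes.scala.
   // Like the generated member exclude, it matches by fully qualified
   // member name, so it silences problems for BOTH functionA overloads.
   ProblemFilters.exclude[DirectMissingMethodProblem](
     "org.apache.spark.sql.ClassA.functionA")
   ```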
   
   In this scenario, we need to ensure that functions marked `private[package]` use names that differ from the other functions in the same class, to avoid the problem above.
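   Concretely, renaming the `private[sql]` overload would look like this (the name `functionAInternal` is just an illustration, not an existing API):

   ```scala
   package org.apache.spark.sql

   class ClassA {
     def functionA(): DataFrame = ...

     // Renamed so the generated exclude for this private[sql] member
     // no longer matches the public functionA above, and MiMa can
     // still flag a missing functionA in the connect client module.
     private[sql] def functionAInternal(colName: String): DataFrame = ...
   }
   ```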
   
   Do you think this is acceptable? Or should we keep the manual exclude?
   
   
   

