boneanxs opened a new pull request #4826:
URL: https://github.com/apache/hudi/pull/4826


   ## *Tips*
   - *Thank you very much for contributing to Apache Hudi.*
   - *Please review https://hudi.apache.org/contribute/how-to-contribute before opening a pull request.*
   
   ## What is the purpose of the pull request
   If we build with `mvn clean install -Pspark3.1.x`, the following errors occur:
   
   ```bash
   [INFO] Add Source directory: /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala
   [INFO] Add Test Source directory: /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/test/scala
   [INFO] 
   [INFO] --- scala-maven-plugin:3.3.1:compile (scala-compile-first) @ hudi-spark3_2.12 ---
   [INFO] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala:-1: info: compiling
   [INFO] Compiling 9 source files to /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/target/classes at 1644996679323
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieSpark3Analysis.scala:182: error: wrong number of arguments for pattern org.apache.spark.sql.catalyst.plans.logical.ShowPartitions(child: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,pattern: Option[org.apache.spark.sql.catalyst.analysis.PartitionSpec])
   [ERROR]       case ShowPartitions(child, specOpt, _)
   [ERROR]                          ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieSpark3Analysis.scala:188: error: wrong number of arguments for pattern org.apache.spark.sql.catalyst.plans.logical.TruncateTable(child: org.apache.spark.sql.catalyst.plans.logical.LogicalPlan,partitionSpec: Option[org.apache.spark.sql.catalyst.catalog.CatalogTypes.TablePartitionSpec])
   [ERROR]       case TruncateTable(child)
   [ERROR]                         ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/analysis/HoodieSpark3Analysis.scala:193: error: not found: value DropPartitions
   [ERROR]       case DropPartitions(child, specs, ifExists, purge)
   [ERROR]            ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieInternalV2Table.scala:101: error: not found: type V1Write
   [ERROR]   override def build(): V1Write = new V1Write {
   [ERROR]                         ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieInternalV2Table.scala:101: error: not found: type V1Write
   [ERROR]   override def build(): V1Write = new V1Write {
   [ERROR]                                       ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieStagedTable.scala:26: error: object V1Write is not a member of package org.apache.spark.sql.connector.write
   [ERROR] import org.apache.spark.sql.connector.write.{LogicalWriteInfo, V1Write, WriteBuilder}
   [ERROR]        ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieStagedTable.scala:88: error: not found: type V1Write
   [ERROR]     override def build(): V1Write = new V1Write {
   [ERROR]                           ^
   [ERROR] /Users/hui.an/Documents/sourcecodes/hudi/hudi-spark-datasource/hudi-spark3/src/main/scala/org/apache/spark/sql/hudi/catalog/HoodieStagedTable.scala:88: error: not found: type V1Write
   [ERROR]     override def build(): V1Write = new V1Write {
   [ERROR]                                         ^
   [ERROR] 8 errors found
   ```
   
   This is because Spark 3.1.2 doesn't have `V1Write`, `DropPartitions`, etc., but enabling the `spark3.1.x` profile switches the `spark3.version` property to Spark 3.1.2. The `hudi-spark3` module should not change its Spark version when the `spark3.1.x` profile is enabled, since `hudi-spark3` is the module that contains the code compatible with Spark 3.2.0 (and above).
   
   So I introduce two properties, `spark3-common.version` and `spark3.1.x.version`, to avoid this conflict.
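   
   As a rough sketch (not the actual diff in this PR), the parent `pom.xml` could wire the two properties like this; the property names come from this PR, while the values and the profile layout shown here are assumptions for illustration:
   
   ```xml
   <!-- Hypothetical sketch, for illustration only: split the Spark 3 version
        into two properties so the spark3.1.x profile no longer drags
        hudi-spark3 down to Spark 3.1.2. -->
   <properties>
     <!-- Referenced by hudi-spark3, which needs Spark 3.2.0+ APIs such as V1Write. -->
     <spark3-common.version>3.2.0</spark3-common.version>
     <!-- Referenced only by the Spark 3.1.x-specific modules. -->
     <spark3.1.x.version>3.1.2</spark3.1.x.version>
   </properties>
   
   <profiles>
     <profile>
       <id>spark3.1.x</id>
       <properties>
         <!-- Override only the generic spark3.version; spark3-common.version
              (and therefore hudi-spark3) stays on Spark 3.2.0. -->
         <spark3.version>${spark3.1.x.version}</spark3.version>
       </properties>
     </profile>
   </profiles>
   ```
   
   With a split like this, `hudi-spark3` can declare its Spark dependencies against `${spark3-common.version}` and keep compiling against Spark 3.2.0 regardless of which profile is active.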
   ## Brief change log
   
     - *Introduce the `spark3-common.version` and `spark3.1.x.version` properties so that `hudi-spark3` keeps building against Spark 3.2.0 when the `spark3.1.x` profile is enabled.*
   
   ## Verify this pull request
   
   This change can be verified by building locally with `mvn clean install -Pspark3.1.x` and confirming that the compilation errors above no longer occur.
   
   ## Committer checklist
   
    - [ ] Has a corresponding JIRA in PR title & commit
    
    - [ ] Commit message is descriptive of the change
    
    - [ ] CI is green
   
    - [ ] Necessary doc changes done or have another open PR
          
     - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
   

