geserdugarov opened a new pull request, #18211: URL: https://github.com/apache/hudi/pull/18211
### Describe the issue this Pull Request addresses

https://github.com/apache/hudi/pull/2328 added support for bulk insert using DSv2 for Spark 3.0 in 2020. Later, https://github.com/apache/hudi/pull/8076/ added support for bulk insert for insert overwrite (table) in `DatasetBulkInsertCommitActionExecutor`, which calls `org.apache.hudi.spark3.internal.DefaultSource`:

```java
String targetFormat = "org.apache.hudi.spark3.internal";
records.write().format(targetFormat)
    .option(DataSourceInternalWriterHelper.INSTANT_TIME_OPT_KEY, instantTime)
    .options(opts)
    .options(customOpts)
    .options(optsOverrides)
    .mode(SaveMode.Append)
    .save();
```

https://github.com/apache/hudi/pull/13301 moved this code from the hudi-spark3-common module to hudi-spark-common. Then https://github.com/apache/hudi/pull/13360 unified the code paths and switched bulk insert to a direct call of `HoodieDatasetBulkInsertHelper::bulkInsert`, bypassing `org.apache.hudi.spark.internal`. As a result, there are no internal calls to `org.apache.hudi.spark.internal` classes anymore.

### Summary and Changelog

Removes the classes in `org.apache.hudi.spark.internal` that are no longer used.

### Impact

No impact. First, these classes are marked as internal by their package name. Second, there is no `.../META-INF/services/org.apache.spark.sql.sources.DataSourceRegister` file that registers `org.apache.hudi.spark.internal.*`, so users could not have called those classes via a registered format name (see the illustrative sketch at the end of this description).

### Risk Level

None

### Documentation Update

None

### Contributor's checklist

- [x] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
- [x] Enough context is provided in the sections above
- [x] Adequate tests were added if applicable
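---

For reference, below is a minimal sketch of Spark's `DataSourceRegister` service-loading mechanism, illustrating why a source without a `META-INF/services/org.apache.spark.sql.sources.DataSourceRegister` entry is not reachable by a registered format name. The class and package names here are hypothetical and are not part of Hudi:

```java
// Hypothetical example -- not Hudi code. Shows how Spark discovers data sources
// by short name through java.util.ServiceLoader.
package org.example.sketch;

import org.apache.spark.sql.sources.DataSourceRegister;

public class ExampleSource implements DataSourceRegister {

  // Spark matches the value returned here against the string passed to
  // DataFrameReader/DataFrameWriter.format(...), but only for classes listed in
  // a META-INF/services/org.apache.spark.sql.sources.DataSourceRegister resource,
  // e.g. a file containing the single line:
  //
  //   org.example.sketch.ExampleSource
  @Override
  public String shortName() {
    return "example-source";
  }
}
```

With the service file in place, `spark.read().format("example-source")` would resolve to this class through Spark's `ServiceLoader` lookup; without it, the short name is unknown to Spark. (A real source would additionally implement `TableProvider` for DSv2 or `RelationProvider` for DSv1; this sketch covers only the registration side.)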
